www.vce.com

VCE Vblock System 240 Gen 3.1 Architecture Overview

Document revision 1.1
August 2015

Revision history

Date           Vblock System   Document revision   Description of changes
August 2015    Gen 3.1         1.1                 Added IPI cabinet information
August 2014    Gen 3.0         1.0                 Initial release

Contents

Introduction
    Accessing VCE documentation
Overview
    Vblock System 240 overview
    Architecture summary
    Base configurations and scaling
    Connectivity overview
Compute layer hardware
    Cisco UCS C220 server
    Cisco Trusted Platform Module
    Scaling up compute resources
    Bare metal support
Storage layer hardware
    Storage overview
    Storage features support
    Scaling up storage resources
    EMC Secure Remote Support
Network layer hardware
    Network overview
Virtualization
    Virtualization overview
    VMware vSphere Hypervisor ESXi
    VMware vCenter Server
Management components
    Management components overview
    Management hardware components
    Management software components
Management network connectivity
    Vblock System 240 AMP-2V network connectivity
    Vblock System 240 AMP-2P network connectivity
    Vblock System 240 AMP-2LP network connectivity
Vblock System 240 configurations
    Cabinets
    Intelligent Physical Infrastructure appliance
    Sample Vblock System 240 configurations
    Vblock System 240 power options
Additional references
    References overview
    Virtualization components
    Compute components
    Network components
    Storage components

Introduction

This document describes the high-level design of the Vblock System 240. This document also describes the hardware and software components that VCE includes in the Vblock System 240.

The target audience for this document includes sales engineers, field consultants, advanced services specialists, and customers who want to deploy a virtualized infrastructure using the Vblock System 240.

The VCE Glossary provides terms, definitions, and acronyms that are related to Vblock Systems.

To suggest documentation changes and provide feedback on this book, send an e-mail to docfeedback@vce.com. Include the name of the topic to which your feedback applies.

Related information:
Accessing VCE documentation

Accessing VCE documentation

Select the documentation resource that applies to your role.

Role: Customer
Resource: support.vce.com (a valid username and password are required). Click VCE Download Center to access the technical documentation.

Role: Cisco, EMC, or VMware employee, or VCE Partner
Resource: partner.vce.com (a valid username and password are required).

Role: VCE employee
Resource: sales.vce.com/saleslibrary or vblockproductdocs.ent.vce.com

Overview

Vblock System 240 overview

The Vblock System 240 has the following features:
- Standardized Vblock System cabinets with multiple North American and international power solutions
- Support for multiple features of the EMC operating environment for EMC VNX2
- Advanced Management Platform (AMP-2) for Vblock System management
- Unified network architecture that provides the option to leverage Cisco Nexus switches to support IP and SAN without the use of Cisco MDS switches

The Vblock 240 contains the following key hardware and software components:

Vblock System management:
- VCE Vision Intelligent Operations System Library
- VCE Vision Intelligent Operations Plug-in for vCenter
- VCE Vision Intelligent Operations Compliance Checker
- VCE Vision Intelligent Operations API for System Library
- VCE Vision Intelligent Operations API for Compliance Checker

Virtualization and management:
- VMware vSphere ESXi
- VMware vCenter Server
- VMware Single Sign-On (SSO) Service (version 5.5 and higher)
- VMware vSphere Server Enterprise Plus
- EMC PowerPath/VE or VMware Native Multipathing (NMP)
- Cisco Integrated Management Controller (CIMC)
- EMC Unisphere Manager
- EMC VNX Local Protection Suite
- EMC VNX Remote Protection Suite
- EMC VNX Application Protection Suite
- EMC VNX FAST Suite
- EMC VNX Security and Compliance Suite
- EMC Secure Remote Support (ESRS)
- Cisco UCS C220 Server for AMP-2

Compute:
- Cisco UCS C220 Server
- Cisco UCS Virtual Interface Card (VIC) 1225

Network:
- Cisco Nexus 1000V Series Virtual Switches
- Cisco Nexus 5548UP Switches
- Cisco Nexus 3048 Switch
- (Optional) VMware Virtual Distributed Switch (VDS) (VMware vSphere 5.5 and higher)

Storage:
- EMC VNX5200

The VCE Vblock System Release Certification Matrix lists the certified versions of components for the Vblock 240.

For information about Vblock System management, refer to the VCE Vision Intelligent Operations Technical Overview.

Related information:
Accessing VCE documentation

Architecture summary

The following table summarizes the Vblock System 240 architecture:

Boot path: SAN
Cabinet: One 42 rack unit (42U) cabinet
Compute: Four to twelve Cisco UCS C220 M3 servers with one or two CPUs
Memory options:
- 64 GB (8 x 8 GB)
- 96 GB (4 x 16 GB and 4 x 8 GB) - single CPU
- 96 GB (8 x 8 GB) - dual CPU
- 128 GB (8 x 16 GB) - single CPU
- 128 GB (16 x 8 GB) - dual CPU
- 192 GB (8 x 16 GB and 8 x 8 GB) - dual CPU
- 256 GB (16 x 16 GB) - dual CPU
CPU options:
- E5-2630Lv2 (6 cores, 2.40 GHz, 60 W)
- E5-2637v2 (4 cores, 3.50 GHz, 130 W)
- E5-2680v2 (10 cores, 2.80 GHz, 115 W)
Datastore type: Block = VMFS; Unified = NFS and VMFS
Disk drives: Minimum configuration = 11; maximum configuration = 105
Management switch: One Cisco Nexus 3048 Switch
Storage access: Block or unified
Storage array: EMC VNX5200
Storage back-end SAS buses: 2
Storage capacity: 25 drive DPE; two 25 drive DAEs for EFD and SAS drives; two 15 drive DAEs for NL-SAS drives
Storage protocol: Block = FC; Unified = FC, NFS, and CIFS
Unified Ethernet/SAN switches: Two Cisco Nexus 5548UP Switches for Ethernet and FC traffic
Unified storage X-Blades: 0 or 2
Unified storage control stations: 0 or 2
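The drive and memory figures in this summary follow from simple arithmetic on the enclosure and DIMM counts. The short Python sketch below is illustrative only (the variable names are assumptions, not VCE tooling); it tallies the drive slots of the enclosures listed under storage capacity and confirms that each memory option matches its DIMM population.

```python
# Cross-check of the architecture summary figures (illustrative sketch).

# Enclosures listed under "Storage capacity": one 25-drive DPE,
# two 25-drive DAEs (EFD/SAS), and two 15-drive DAEs (NL-SAS).
enclosure_slots = [25, 25, 25, 15, 15]
print(sum(enclosure_slots))  # 105 -> matches "maximum configuration = 105" drives

# Memory options: label -> DIMM population (sizes in GB).
memory_options = {
    "64 GB (8 x 8 GB)": [8] * 8,
    "96 GB (4 x 16 GB and 4 x 8 GB) - single CPU": [16] * 4 + [8] * 4,
    "96 GB (8 x 8 GB) - dual CPU": [8] * 8,
    "128 GB (8 x 16 GB) - single CPU": [16] * 8,
    "128 GB (16 x 8 GB) - dual CPU": [8] * 16,
    "192 GB (8 x 16 GB and 8 x 8 GB) - dual CPU": [16] * 8 + [8] * 8,
    "256 GB (16 x 16 GB) - dual CPU": [16] * 16,
}
for label, dimms in memory_options.items():
    total = sum(dimms)
    assert label.startswith(f"{total} GB")  # the DIMM population adds up to the label
    print(f"{label}: {total} GB across {len(dimms)} DIMMs")
```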

Base configurations and scaling

The Vblock System 240 is available in many different configurations. Each configuration provides a different level of compute and storage capacity based upon the components contained in the Vblock 240. These components are contained in a 19-inch, 42U cabinet.

The following hardware aspects can be customized:

Compute servers: Four to twelve Cisco UCS C220 servers with CPU and memory configuration options.

Data Mover Enclosure (DME) packs: One DME (two X-Blades) and two control stations can be added as part of a unified storage addition.

Storage: EMC VNX5200.

Storage hardware: Drive flexibility for up to three tiers of storage per pool, drive quantities in each tier, the RAID protection for each pool, and the number of disk array enclosures (DAEs).
Supported disk drives:
- 100/200/400 GB 2.5" solid state drive (extreme performance)
- 600/900 GB 10K RPM 2.5" SAS (performance)
- 2/3/4 TB 7.2K RPM 3.5" NL-SAS (capacity)
Supported RAID types:
- Tier 1: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2) or (14+2)
- Tier 2: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2) or (14+2)
- Tier 3: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2) or (14+2)

Management hardware: Advanced Management Platform (AMP-2) centralizes management of Vblock System components.

Together, the components offer balanced compute, I/O bandwidth, and storage capacity relative to the servers and storage arrays in the Vblock System. All components have 1+1 or N+1 redundancy. These resources can be scaled up as necessary to meet increasingly stringent requirements.

Vblock Systems are designed to keep hardware changes to a minimum should the customer decide to change the storage protocol after installation (for example, from block storage to unified storage). Cabinet space is reserved for all components that are needed for each storage configuration, all fitting in a single 42U cabinet.

Related information:
Management hardware components
Scaling up compute resources
Scaling up storage resources
Storage overview

Connectivity overview

Components and interconnectivity within Vblock Systems are conceptually subdivided into three layers:

Compute: Contains the components that provide the computing power within the Vblock System and host the virtual machines. The Cisco UCS C220 Servers belong in this layer.
Network: Contains the components that provide switching and connectivity between the compute and storage layers within a Vblock System, and between the Vblock System and the customer network. The Cisco Nexus 3048 Switch and the Cisco Nexus 5548UP Switches belong in this layer.
Storage: Contains the EMC VNX5200 storage array, which provides the storage resources within the Vblock System.

All components incorporate redundancy into the design.

The following illustration provides a high-level overview of the components with redundant network connectivity:

The VCE Vblock System 240 Port Assignments Reference provides information about the assigned use for each port in the Vblock System.

Related information:
Cisco UCS C220 server
Network overview
Storage overview

Compute layer hardware

Cisco UCS C220 server

The Cisco UCS C220 server is a high-density, two-socket, one rack unit (1 RU) rack-mount server that is built for production-level network infrastructure, web services, and mainstream data center, branch, and remote-office applications.

The Vblock System 240 contains from four to twelve Cisco UCS C220 servers. The Cisco UCS C220 server provides:
- One or two processors
- The following CPU types:
  - E5-2680v2 (10 cores, 2.80 GHz, 115 W)
  - E5-2630Lv2 (6 cores, 2.40 GHz, 60 W)
  - E5-2637v2 (4 cores, 3.50 GHz, 130 W)
- 64 GB, 96 GB, 128 GB, 192 GB, or 256 GB of memory (192 GB and 256 GB of memory are supported only with two CPUs)
- Cisco UCS Virtual Interface Card (VIC) 1225 for converged networking
- Virtualization optimization

The VCE Vblock System 240 Port Assignments Reference provides information about the assigned use for each port in the Vblock System.

Related information:
Accessing VCE documentation

Cisco Trusted Platform Module

The Cisco Trusted Platform Module (TPM) is a computer chip that securely stores artifacts such as measurements, passwords, certificates, or encryption keys that are used to authenticate the Vblock System. The Cisco TPM provides authentication and attestation services that enable safer computing in all environments.

The Cisco TPM is available by default in Vblock Systems as a component within some Cisco UCS Blade Servers and Rack Servers, and is shipped disabled. For more information, refer to the VCE Vblock Systems Blade Packs Reference.

VCE supports the Cisco TPM hardware but does not support the Cisco TPM functionality. Using Cisco TPM features involves using a software stack from a vendor with significant domain experience in trusted computing. Consult your software stack vendor for configuration and operational considerations relating to the Cisco TPMs.

Related information:
Accessing VCE documentation

Scaling up compute resources

The Vblock System 240 contains from four to twelve Cisco UCS C220 servers. You can add Cisco UCS C220 servers as needed, in single-server increments, up to the maximum number of servers.

Related information:
Base configurations and scaling

Bare metal support

This topic describes the VCE bare metal support policy.

Note: An AMP-2 option is available to support the Vblock System 240 bare metal configuration; the AMP-2 minimum physical deployment is required.

Because many applications cannot be virtualized for technical and commercial reasons, Vblock Systems support bare metal deployments, that is, non-virtualized operating systems and applications, as described in this topic. While it is possible for a Vblock System to support these workloads (with the caveats noted below), due to the nature of bare metal deployments, VCE provides only reasonable-effort support for systems that comply with the following requirements:
- The Vblock System contains only VCE published, tested, and validated hardware and software components. The VCE Release Certification Matrix provides a list of the certified versions of components for Vblock Systems.
- The operating systems used on bare metal deployments for compute and storage components must comply with the published hardware and software compatibility guides from Cisco and EMC.
- For bare metal deployments that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by VCE. VCE support is provided only on VMware hypervisors.

VCE reasonable-effort support includes VCE acceptance of customer calls, a determination of whether the Vblock System is operating correctly, and assistance in problem resolution to the extent possible.

VCE is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, VCE does not provide updates to or test those operating systems or applications. Customers must work directly with their OEM support vendor for issues and patches related to those operating systems and applications.

Bare metal workloads require separate physical disk drives, disk groups, and storage pools to ensure that bare metal workloads do not adversely impact Vblock System workloads.

Related information:
Accessing VCE documentation
Vblock System 240 AMP-2LP network connectivity

Storage layer hardware

Storage overview

The storage layer in a Vblock System 240 consists of an EMC VNX5200 storage array.

The EMC VNX series is composed of fourth-generation storage platforms that deliver industry-leading capabilities. They offer a unique combination of flexible, scalable hardware design and advanced software capabilities that enable them to meet the diverse needs of today's organizations.

EMC VNX series platforms support block and unified storage. The platforms are optimized for VMware virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.

The storage array consists of up to five disk enclosures (DPEs and DAEs) that contain disk drives. These connect to dual storage processors (SPs) over 6 Gb/s four-lane serial attached SCSI (SAS). Fibre Channel (FC) expansion cards within the storage processors connect to the Cisco Nexus 5548UP switches within the network layer over FC.

Regardless of the storage protocol implemented at startup (block or unified), cabinet space can accommodate cabling and power to support future hardware expansion for either of these storage protocols. This arrangement makes it easier to move from block storage to unified storage with minimal hardware changes. However, all components must fit in the single 42U cabinet.

Note: All EMC VNX components are installed in VCE cabinets, in VCE-specific layouts.

Related information:
Base configurations and scaling
Storage features support

Storage features support

This topic presents additional storage features available on the Vblock System 240.

Support for array hardware or capabilities

The EMC VNX operating environment provides support for the following features:

NFS Virtual X-Blades VDM (multi-LDAP support): Virtual data movers (VDMs) provide security and segregation for clients in service provider environments.

vStorage API for Array Integration (VAAI) enhancement: NFS snap-on-snap of VMDK files to one level of depth; for example, a fast clone of a fast clone that is a second-level clone of a base image of a virtual machine.

Data-in-place block compression: When compression is enabled, thick LUNs are converted to thin LUNs and are compressed in place. RAID group LUNs are migrated into a pool during compression. There is no need for additional space to start compression. Decompression temporarily requires additional space, since it is a migration and not an in-place decompression.

Compression for file / display compression capacity savings: Two available types of file compression are fast compression (the default) and deep compression (up to 30% more space efficient, but slower and with higher CPU usage). Capacity savings due to compression are displayed to inform the user of savings from compression (separate from savings due to thin LUNs), so that users can do a cost/benefit comparison (space savings versus performance impact).

EMC VNX snapshots: EMC VNX snapshots are for storage pools only, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time. Both types of snapshots have seamless support. VCE relies on guidance from EMC best practices for different use cases of EMC SnapView snapshots versus EMC VNX snapshots. Note: This feature is optional.

Correlated statistics enhancements: Correlated statistics capture real-time statistics about I/O activity on EMC VNX file storage. CIFS and NFS statistics are correlated by client IP addresses, CIFS/NFS users, and disk volumes. The following indexes are added to the correlated statistical data:
- NFS statistics: NFS exports (the 20 most active exports on an X-Blade), NFS groups (the 20 most active NFS groups for an X-Blade), NFS users (the 20 most active NFS users for an X-Blade), and NFS file systems (the top 20 NFS operations per client per file system, which can be correlated by file system, client IP, or NFS operation)
- CIFS statistics: CIFS servers (the top 20 most active CIFS servers on an X-Blade) and CIFS clients (by host name)
- File system statistics: files in a quota tree (the 20 most active files in each quota tree) and files in a file system (the 20 most active files in each file system)

Hardware features

VCE implements the following EMC VNX features in the Vblock 240:
- Dual 10 Gigabit Ethernet Active Twinax IP I/O SLICs between the X-Blades and the Cisco Nexus 5548UP Switches

- Utilization of 2½ inch vault drives
- 2½ inch and 3½ inch DAEs and drive form factors

File deduplication

File deduplication is supported but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

Block compression

Block compression is supported but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

External NFS and CIFS access

The Vblock System 240 presents CIFS shares and NFS file systems to internal and external clients. CIFS shares and NFS file systems are connected to both internal virtual machines and to external client systems at the same time on a single share system. VMware NFS datastores are connected to both internal and external hosts, but not at the same time. If both internal and external NFS datastores are required, two NFS file systems must be created: one for internal hosts and one for external hosts.

Snapshots

For block storage snapshots, EMC SnapView is supported. This software provides local block replication using snaps and clones without the extra cost of the optional EMC RecoverPoint appliances. This software is included in the EMC VNX Local Protection Suite.

For NAS file system snapshots, EMC SnapSure is supported. It is included in the EMC VNX Local Protection Suite. EMC VNX Snapshots creates point-in-time data copies.

Replicas

For Vblock System NAS configurations, EMC VNX Replicator is supported. This software can create local clones (full copies) and replicate file systems asynchronously across IP networks. EMC VNX Replicator is included in the EMC VNX Remote Protection Suite.

Scaling up storage resources

This topic describes the RAID pack and disk array enclosure options for scaling up storage resources.

To scale up storage resources, you can:
- Add RAID packs
- Add disk array enclosures

RAID packs and disk array enclosures can be added when Vblock Systems are built and after they are deployed.

RAID packs

Storage capacity is increased by adding RAID packs. Each pack contains a number of drives of a given type, speed, and capacity. The number of drives in a pack depends on the RAID level that it supports.

The number and types of RAID packs included in a Vblock System are based on the following:

The number of storage pools that are needed.

The storage tiers that each pool contains, and the speed and capacity of the drives in each tier. The following table lists tiers, supported drive types, and supported speeds and capacities. Note: The speed and capacity of all drives within a given tier in a given pool must be the same.

- Tier 1 - Solid-state Enterprise Flash drives (EFD) (extreme performance): 100 GB 2.5", 200 GB 2.5", 400 GB 2.5"
- Tier 2 - Serial attached SCSI (SAS) (performance): 600 GB 10K RPM 2.5", 900 GB 10K RPM 2.5"
- Tier 3 - Nearline SAS (capacity): 2 TB 7.2K RPM 3.5", 3 TB 7.2K RPM 3.5", 4 TB 7.2K RPM 3.5"

The RAID protection level for the tiers in each pool. The following table describes each supported RAID protection level. The RAID protection level for the different pools can vary.

RAID 1/0: A set of mirrored drives.
- Offers the best overall performance of the three supported RAID protection levels.
- Offers robust protection. Can sustain double-drive failures that are not in the same mirror set.
- Lowest economy of the three supported RAID levels, since usable capacity is only 50% of raw capacity.

RAID 5: Block-level striping with a single parity block, where the parity data is distributed across all of the drives in the set.
- Offers the best mix of performance, protection, and economy.
- Has a higher write performance penalty than RAID 1/0 because multiple I/Os are required to perform a single write.
- With single parity, can sustain a single drive failure with no data loss. Vulnerable to data loss or unrecoverable read errors on a track during a drive rebuild.
- Highest economy of the three supported RAID levels. Usable capacity is 80% of raw capacity.

RAID 6: Block-level striping with two parity blocks, distributed across all of the drives in the set.
- Offers increased protection and read performance comparable to RAID 5.
- Has a significant write performance penalty because multiple I/Os are required to perform a single write.
- Economy is very good. Usable capacity is 75% of raw capacity or better.
- EMC best practice for SATA and NL-SAS drives.

There are RAID packs for each combination of RAID protection level and tier type, and the RAID level dictates the number of drives that are included in the pack. RAID 5 or RAID 1/0 is used for the performance and extreme performance tiers, and RAID 6 is used for the capacity tier. The following table lists the RAID protection levels and the number of drives in the pack for each level:

- RAID 1/0: 8 drives (4 data + 4 mirrors)
- RAID 5: 5 drives (4 data + 1 parity) or 9 drives (8 data + 1 parity)
- RAID 6: 8 drives (6 data + 2 parity) or 16 drives (14 data + 2 parity)

Disk array enclosures

If the number of RAID packs in a Vblock System is expanded, more disk array enclosures (DAEs) might be required. DAEs are added individually. The base configuration includes the DPE as the first DAE. There are two types of DAEs: a 15 drive 3U enclosure for 3½ inch form factor drives and a 25 drive 2U enclosure for 2½ inch form factor drives. To ensure that the loads are balanced, physical disks are spread across the DAEs in accordance with best practice guidelines.

Related information:
Base configurations and scaling
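Because each RAID pack has a fixed data-to-parity ratio, the usable share of a pack's raw capacity follows directly from the drive counts above (RAID 1/0 yields 50%, RAID 5 (4+1) yields 80%, and RAID 6 yields 75% or better). The following Python sketch is illustrative only; the names are assumptions for this document, not VCE or EMC tooling, and it simply reproduces that arithmetic for a chosen pack and drive size.

```python
# Usable capacity per RAID pack, derived from the drive counts listed above.
# (data drives, total drives) per pack
RAID_PACKS = {
    "RAID 1/0 (4+4)": (4, 8),
    "RAID 5 (4+1)": (4, 5),
    "RAID 5 (8+1)": (8, 9),
    "RAID 6 (6+2)": (6, 8),
    "RAID 6 (14+2)": (14, 16),
}

def usable_capacity(pack: str, drive_capacity_tb: float) -> float:
    """Raw capacity of the data drives in one pack, ignoring hot spares and overhead."""
    data_drives, _total_drives = RAID_PACKS[pack]
    return data_drives * drive_capacity_tb

for pack, (data, total) in RAID_PACKS.items():
    print(f"{pack}: usable = {100 * data / total:.1f}% of raw")
# RAID 1/0 -> 50.0%, RAID 5 (4+1) -> 80.0%, RAID 6 (6+2) -> 75.0%;
# RAID 5 (8+1) and RAID 6 (14+2) do better because larger sets amortize the parity.

# Example: one RAID 6 (6+2) pack of 4 TB NL-SAS drives
print(usable_capacity("RAID 6 (6+2)", 4.0), "TB usable")  # 24.0 TB
```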

EMC Secure Remote Support

This topic describes how EMC Secure Remote Support (ESRS) monitors the health of EMC VNX storage arrays.

ESRS allows EMC personnel to remotely monitor the health of EMC VNX storage arrays in the Vblock System, and to perform support and maintenance functions. ESRS serves as the conduit for all communications between EMC and the EMC VNX storage arrays. ESRS monitors the health of multiple EMC VNX storage arrays in Vblock Systems and elsewhere.

EMC ESRS is integrated into the EMC VNX base software suite. Detailed information about ESRS is available at support.emc.com.

Network layer hardware

Network overview

The Vblock System 240 contains Ethernet/SAN switches and a management switch to allow interconnection of all Vblock System components to each other and to external resources such as backup and recovery servers and the customer's network.

Ethernet/SAN switches

Each Vblock 240 includes two Cisco Nexus 5548UP Switches to provide Ethernet and Fibre Channel (FC) connectivity:
- Between the Vblock System internal components
- To the site network
- To the redundant connections on the Cisco Nexus 3048 switches
- To the Advanced Management Platform (AMP-2), through redundant connections between AMP-2 and the Cisco Nexus 5548UP switches

The two Cisco Nexus 5548UP Switches support low-latency, line-rate 10 Gb Ethernet and 8 Gb FC connectivity on up to 32 ports. A unified port expansion module provides an extra 16 ports of 10 GbE or 8 Gb FC connectivity. For the Vblock 240, the 16-port expansion module is dedicated to FC connectivity. The FC ports are licensed in packs of eight, based on demand.

Management switch

The Cisco Nexus 3048 Switch in the network layer provides IP connectivity between compute-layer components and storage-layer components. It also provides IP connectivity to the site network. This switch provides configuration flexibility with LAN Base, IP Base, and IP Services software.

The Vblock 240 uses two 1/10 Gbps SFP+ uplink ports for customer connectivity. Depending on the desired connection, 1 GbE copper SFP+, 1 GbE fiber SFP+, or 10 GbE SFP+ transceivers are provided. To facilitate the required level of redundancy and throughput, the customer network uplink device should support port aggregation.

The VCE Vblock System 240 Port Assignments Reference provides information about the assigned use for each port in the Vblock System.

Related information:
Accessing VCE documentation
Management hardware components
Management software components
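As an illustration of the port-licensing arithmetic described above (the 16-port expansion module carries the FC ports, which are licensed in packs of eight), the following hypothetical Python helper estimates the number of license packs for a given FC port count. The function name and the idea of sizing from a port count are assumptions for illustration, not a VCE sizing tool.

```python
import math

FC_EXPANSION_PORTS = 16   # 16-port unified expansion module, dedicated to FC on the Vblock 240
FC_LICENSE_PACK_SIZE = 8  # FC ports are licensed in packs of eight

def fc_license_packs(required_fc_ports: int) -> int:
    """Estimate the 8-port FC license packs needed on a Cisco Nexus 5548UP expansion module."""
    if required_fc_ports > FC_EXPANSION_PORTS:
        raise ValueError("more FC ports requested than the 16-port expansion module provides")
    return math.ceil(required_fc_ports / FC_LICENSE_PACK_SIZE)

print(fc_license_packs(6))   # 1 -> one pack covers up to 8 FC ports
print(fc_license_packs(12))  # 2 -> two packs cover 9 to 16 FC ports
```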

Virtualization

Virtualization overview

VMware vSphere storage virtualization is a combination of vSphere features and APIs that provide an abstraction layer so that physical storage resources can be addressed, managed, and optimized in a virtualized deployment.

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are the VMware vSphere Hypervisor ESXi and VMware vCenter Server for management. vSphere 5.5 or higher includes a Single Sign-On (SSO) component.

The hypervisors are deployed in a cluster configuration and can scale up to 12 nodes per cluster. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

EMC PowerPath/VE is available for path management. EMC PowerPath/VE is the default configuration and is recommended by VCE.

Related information:
VMware vCenter Server
VMware vSphere Hypervisor ESXi

VMware vSphere Hypervisor ESXi

VMware vSphere Hypervisor ESXi is an operating system that supports a virtualized environment. The VMware vSphere Hypervisor ESXi runs on the second generation of the Advanced Management Platform (AMP-2) and in the Vblock System using VMware vSphere Server Enterprise Plus.

This lightweight hypervisor requires very little space to run (less than 6 GB of storage is required to install it) and has minimal management overhead. VMware vSphere ESXi does not contain a console operating system.

On the AMP-2P, the VMware vSphere Hypervisor ESXi boots from the Cisco FlexFlash (SD card). For the Cisco UCS C-Series servers, the VMware vSphere Hypervisor ESXi boots from SAN through an independent FC LUN presented from the EMC VNX storage array. The FC LUN also contains the hypervisor's scratch area for persistent storage of logs and other diagnostic files, which provides stateless computing within the Vblock System. The stateless hypervisor is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources that are allocated to virtual machines. Clusters scale up to a maximum of 12 hosts and can support thousands of virtual machines. A cluster can also contain a variety of Cisco UCS C-Series server models.

Note: Some advanced CPU functionality might be unavailable if more than one CPU model is running in a cluster.

Datastores

Vblock Systems support a mixture of datastore types: block-level storage using VMFS or file-level storage using NFS. The maximum size per VMFS5 volume is 64 TB (50 TB for VMFS3 with a 1 MB block size). VMware vSphere 5.5 and higher limits the VMDK file size to 62 TB. Each host/cluster supports a maximum of 255 volumes. VCE optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in Vblock Systems to maximize the throughput and scalability of NFS datastores. The Vblock System supports a maximum of 256 NFS datastores per host.

Virtual networks

Virtual networking in the AMP-2P uses standard virtual switches. The Cisco Nexus 1000V distributed virtual switch manages virtual networking in the Vblock System. The Cisco Nexus 1000V Switch ensures consistent, policy-based network capabilities for all servers in the data center by allowing policies to move with a virtual machine during live migration. This provides persistent network, security, and storage compliance.

Alternatively, with VMware vSphere 5.5 and later, you can select the VMware vSphere Distributed Switch (VDS) to provide virtual networking for the Cisco UCS C-Series servers. vCenter manages the VDS and ensures consistent, policy-based networking capabilities, including load balancing, prioritizing and segregating traffic, and monitoring the health and security of the switch.

The Cisco Nexus 1000V Switch and the VMware VDS use intelligent network Class of Service (CoS) marking, which matches physical switch Quality of Service (QoS) policies to shape network traffic according to workload type and priority.

Related information:
Management hardware components
Management software components
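The datastore maximums quoted above (64 TB per VMFS5 volume, a 62 TB VMDK limit with vSphere 5.5 and higher, 255 volumes per host/cluster, and 256 NFS datastores per host) can be checked mechanically when planning a layout. The following Python sketch is a hypothetical planning helper written for illustration; it is not part of vSphere or of any VCE tool.

```python
# Validate a proposed datastore layout against the maximums quoted above.
VMFS5_MAX_VOLUME_TB = 64
VMDK_MAX_TB = 62           # VMware vSphere 5.5 and higher
MAX_VOLUMES_PER_HOST = 255
MAX_NFS_DATASTORES = 256   # per host

def check_layout(vmfs_volume_sizes_tb, nfs_datastore_count, largest_vmdk_tb):
    """Return a list of limit violations for a proposed layout (empty list means OK)."""
    problems = []
    if len(vmfs_volume_sizes_tb) > MAX_VOLUMES_PER_HOST:
        problems.append("more than 255 volumes for one host/cluster")
    if any(size > VMFS5_MAX_VOLUME_TB for size in vmfs_volume_sizes_tb):
        problems.append("a VMFS5 volume exceeds 64 TB")
    if nfs_datastore_count > MAX_NFS_DATASTORES:
        problems.append("more than 256 NFS datastores for one host")
    if largest_vmdk_tb > VMDK_MAX_TB:
        problems.append("a VMDK exceeds the 62 TB limit")
    return problems

# Example: forty 8 TB VMFS volumes, ten NFS datastores, largest VMDK of 4 TB
print(check_layout([8] * 40, 10, 4))  # [] -> within the documented maximums
```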

VMware vCenter Server

VMware vCenter Server is a central management point for the hypervisors and virtual machines. VMware vCenter is installed on a 64-bit Windows Server and runs VMware Update Manager as a service to help manage host patches. VMware Update Manager is installed on a separate Windows Server and runs as a Windows service.

VMware vCenter Server allows you to clone virtual machines and create templates. VMware vCenter Server also provides monitoring and alerting capabilities for hosts and virtual machines. Vblock System administrators can create and apply alarms to all managed objects in VMware vCenter Server. These alarms include:
- Data center, cluster, and host health, inventory, and performance
- Datastore health and capacity
- Virtual machine usage, performance, and health
- Virtual network usage and health

AMP-2V

The second generation of the virtual Advanced Management Platform (AMP-2V), formerly the Logical Advanced Management Platform (LAMP), resides in a vApp located across two nondedicated compute servers in the Vblock System. AMP-2V shares resources with other production workloads for hardware cost efficiency.

Databases

The back-end database that supports VMware vCenter Server and VMware Update Manager (VUM) is Microsoft SQL Server 2012 for vSphere 5.5. For VMware vSphere 5.5 and higher, a second SQL Server database exists for Single Sign-On. The SQL Server service requires a dedicated service account, depending on the security requirements.

Authentication

Vblock Systems support the VMware vCenter Single Sign-On (SSO) Service, which can integrate multiple identity sources, including Active Directory, OpenLDAP, and local accounts, for authentication. VMware SSO is available in VMware vSphere 5.5 and higher. VMware vCenter Server, Inventory Service, Web Client, SSO, Core Dump Collector, and Update Manager run as separate Windows services. Each service is configured to use a dedicated service account, depending on the security and directory services requirements.

VCE supported features

VCE supports the following VMware vCenter Server features:
- VMware Single Sign-On (SSO) Service (version 5.5 and higher)

- VMware vSphere Web Client (used with VCE Vision Intelligent Operations)
- VMware vCenter vSphere Distributed Switch (VDS)
- VMware vSphere High Availability (when used with shared storage)
- VMware vSphere Distributed Resource Scheduler (DRS) (available if VMware vSphere Enterprise Plus is licensed on all ESXi hosts in a cluster)
- Fault tolerance
- VMware vMotion and VMware Storage vMotion
- Raw device mappings
- Resource pools
- Storage DRS (capacity only) (available if VMware vSphere Enterprise Plus is licensed on all ESXi hosts in a cluster)
- Storage-driven profiles (user-defined only) (available if VMware vSphere Enterprise Plus is licensed on all ESXi hosts in a cluster)
- Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/servers)
- VMware vCenter Web services
- VMware Syslog service
- VMware Core Dump Collector

Management components

Management components overview

The Advanced Management Platform (AMP-2) is a management system that includes the hardware and software to run Core Management and VCE Optional workloads.

The Core Management Workload is the minimum set of software required to install, operate, and support a Vblock System, including hypervisor management, element managers, virtual networking components (the Cisco Nexus 1000V switch or the Virtual Distributed Switch (VDS)), and VCE Vision Intelligent Operations software.

AMP-2 provides a single management point for Vblock Systems and provides the ability to:
- Run the core management and optional workloads
- Monitor and manage Vblock System health, performance, and capacity
- Provide network and fault isolation for management
- Eliminate resource overhead on Vblock Systems

Management hardware components

The second generation of the Advanced Management Platform (AMP-2) is available in three options:

AMP-2V (two Cisco UCS C220 servers with shared storage; runs within a vApp): AMP-2V, available by default, provides a highly available management platform that resides in a vApp. AMP-2V uses nondedicated Cisco UCS C220 servers to run management workload applications that are shared with other workloads on the Vblock System.

AMP-2P (one Cisco UCS C220 server): AMP-2P is an optional configuration for the Vblock System. AMP-2P uses a dedicated Cisco UCS C220 server to run management workload applications using its own resources, not customer resources.

AMP-2LP (one Cisco UCS C220 server): AMP-2LP is an optional configuration for the Vblock System. AMP-2LP uses a dedicated Cisco UCS C220 server to run a subset of the management workload applications using its own resources, not customer resources, when no VMware virtualization is available. AMP-2LP is available on Vblock Systems that are not running VMware virtualization.

Related information:
Base configurations and scaling

Management software components

AMP-2 is delivered pre-configured with the following software tools:

- Microsoft Windows Server 2008 R2 Standard SP1 x64 (eight licenses for the Cisco C220 A server and two licenses for the Cisco C220 B server)
  Note: AMP-2V includes the licenses in the vApp.
- VMware vSphere Server Enterprise Plus (VMware vSphere 5.5 and higher)
  Note: AMP-2V uses existing licenses for the production workload of the Vblock System.
- VMware vSphere Hypervisor ESXi, VMware Single Sign-On (SSO) Service (version 5.5 and higher), VMware vSphere Web Client, VMware vCenter Server, VMware vCenter database using Microsoft SQL Server 2012 Standard for vSphere 5.5, VMware vCenter Update Manager, and the VMware vSphere Client
- Virtual networking component (Cisco Nexus 1000V switch or VMware Virtual Distributed Switch (VDS))
  Note: Vblock Systems using VMware VDS do not include Cisco Nexus 1000V VSM VMs, and vice versa.
- EMC PowerPath/VE License Management Server
- EMC Secure Remote Support (ESRS)
- Array management modules, including but not limited to EMC Unisphere Client and Server, EMC Unisphere Service Manager, EMC VNX Initialization Utility, and EMC VNX Startup Tool
- Cisco Device Manager and Cisco Data Center Network Manager
- (Optional) EMC RecoverPoint management software, which includes the EMC RecoverPoint Management Application and EMC RecoverPoint Deployment Manager
- (Optional) Cisco Secure Access Control Server (ACS)

Management network connectivity

Vblock System 240 AMP-2V network connectivity

AMP-2V is the virtual version of the second-generation Advanced Management Platform. It runs Core Management and VCE Optional Workloads as virtual machines on customer resources.

Vblock System 240 AMP-2V connectivity rules

- AMP-2V uses the first two Vblock System compute servers and leverages two 1 Gb ports on the Vblock System management switch (shared Cisco Integrated Management Controller (CIMC)) and one 10 Gb port on each Vblock System switch (Cisco Nexus 5548UP Switch).
- Uplinks connect from each Vblock System switch (Cisco Nexus 5548UP Switch) into the customer network for VLAN 105 only.
- Uplinks connect from the management switch into the customer network for VLAN 101 only.
- All networks, except VLAN 101, are owned by the Vblock System switches (Cisco Nexus 5548UP Switches).
- The VMware VMkernel default gateway is on VLAN 105.

The following illustration provides an overview of the network connectivity for the Vblock 240 AMP-2V:

Note: This illustration reflects the connections between the devices, not the quantity of those connections.

The following illustration provides an overview of the VM server assignment.

Note: VMware vSphere 5.5 integrates the Single Sign-On (SSO) Service with the VMware vCenter VM.

Note: Vblock Systems using VMware Virtual Distributed Switch (VDS) do not include Cisco Nexus 1000V VSM VMs.

Vblock System 240 AMP-2P network connectivity

AMP-2P is the minimum physical version of the second-generation Advanced Management Platform. It provides a single dedicated server running Core Management and VCE Optional Management Workloads as virtual machines using its own resources, not customer resources.

Vblock System 240 AMP-2P connectivity rules

- AMP-2P has a single dedicated server that leverages three 1 Gb ports on the Vblock System management switch (dedicated Cisco Integrated Management Controller (CIMC)) and one 10 Gb port on each Vblock System switch (Cisco Nexus 5548UP Switch).
- Uplinks connect from each Vblock System switch (Cisco Nexus 5548UP Switch) into the customer network for VLAN 105 only.
- Uplinks connect from the management switch into the customer network for VLAN 101 only.

- All networks, except VLAN 101, are owned by the Vblock System switches (Cisco Nexus 5548UP Switches).
- The VMware VMkernel default gateway is on VLAN 105.

Vblock System 240 AMP-2P network connectivity

The following illustration shows the Vblock 240 AMP-2P management connectivity:

Note: This illustration reflects the connections between the devices, not the quantity of these connections.

Vblock System 240 AMP-2P virtual machine server assignment

Note: Vblock Systems using VMware Virtual Distributed Switch (VDS) do not include Cisco Nexus 1000V VSM VMs.

Vblock System 240 AMP-2LP network connectivity

AMP-2LP is the limited physical version of the second-generation Advanced Management Platform. It uses a single dedicated server with a limited hardware configuration to run a subset of the Core Management Workload as virtual machines using its own resources.

Vblock System 240 AMP-2LP connectivity rules

- AMP-2LP has a single dedicated server and leverages three 1 Gb ports on the Vblock System management switch (dedicated Cisco Integrated Management Controller (CIMC)).
- All virtual machines on AMP-2LP are on VLAN 101.
- Uplinks connect from the management switch into the customer network for VLAN 101 only.

Vblock System 240 AMP-2LP network connectivity

Note: This illustration reflects the connections between the devices, not the quantity of these connections.

Vblock System 240 AMP-2LP virtual machine server assignment

Vblock System 240 configurations

Cabinets

The Vblock System 240 uses standard, predefined cabinet layouts. The compute, storage, and network layer components of a Vblock 240 are contained in a 42U cabinet that conforms to a standard, predefined layout. Space is reserved for specific components even if they are not present or required for the customer configuration. This design makes it easier to upgrade or expand each Vblock System as capacity needs increase.

Intelligent Physical Infrastructure appliance

The Intelligent Physical Infrastructure (IPI) appliance provides an intelligent gateway to gather information about power, thermals, security, alerts, and all components in the physical infrastructure for each cabinet. For more information about the IPI appliance, refer to the administration guide for your VCE System.

Sample Vblock System 240 configurations

Vblock System 240 systems are available in many configurations. This topic shows a sample minimum and a sample maximum configuration.

Sample minimum configuration

The following illustration shows a sample Vblock 240 minimum configuration:

This sample minimum configuration shows the base components and reserved space for additional upgrades:
- One Cisco Nexus 3048 Switch
- Two Cisco Nexus 5548UP Switches
- One EMC VNX5200 storage array with FC block. Storage capacity is expandable up to five DAEs/105 drives (including the DPE), depending upon drive type mixture.
- Four Cisco UCS C220 Servers

- One Cisco UCS C220 Server for AMP-2

Sample maximum configuration

The following illustration shows a sample maximum configuration:

This sample maximum configuration shows:
- One Cisco Nexus 3048 Switch
- Two Cisco Nexus 5548UP Switches
- One EMC VNX5200 with unified capability and capacity for 105 drives
- Two 15 drive NL-SAS DAEs

- Two 25 drive SAS/EFD DAEs
- Twelve Cisco UCS C220 Servers
- One Cisco UCS C220 Server for AMP-2

Expanding the storage capacity

The base configuration has a 25 drive DPE with an 8 drive vault, 2 drive FAST Cache, and one T1 hot spare. The storage capacity is expanded from the minimum configuration to the maximum configuration, incrementally, by adding:
- Two additional 25 drive DAEs for EFD or SAS drives (can be added individually)
- Two 15 drive DAEs for NL-SAS drives (can be added individually)
- RAID packs

Expanding the storage protocol

The base configuration is block only. The storage protocol can be expanded to support unified storage by adding a maximum of two X-Blades for unified storage and two control stations for management.

Related information:
Scaling up storage resources

Vblock System 240 power options

The Vblock System 240 supports different Power Outlet Unit (POU) options. There are four POUs per cabinet, and they are energized in pairs as needed.

The following table lists the available power options:

North America: 30 A, 208 V, single phase; plug/receptacle NEMA L6-30P
Outside of North America (EMEA): 32 A, 230 V, single phase; plug/receptacle IEC 60309 2P+E pin-and-sleeve connector
Japan: 30 A, 208 V, single phase; plug/receptacle JIS C8303 L6-30P

The VCE Vblock System 240 Physical Planning Guide provides more information about power requirements.

Related information:
Accessing VCE documentation

Additional references

References overview

The following sections list documentation provided by Cisco, EMC, and VMware for each of the product lines that are discussed in this document.

Virtualization components

VMware vCenter Server: Provides a scalable and extensible platform that forms the foundation for virtualization management. www.vmware.com/solutions/virtualization-management/

VMware Single Sign-On (SSO) Service: Provides VMware-specific authentication services. blogs.vmware.com/kb/2012/10/vsphere-sso-resources.html

EMC PowerPath/VE: Provides automated data path management and load-balancing capabilities for server, network, and storage deployed in virtual and physical environments. www.emc.com/storage/powerpath/powerpath.htm

Compute components

Cisco UCS C220 server: High-density, rack-mount server for production-level network infrastructure, web services, and mainstream data center, branch, and remote-office applications. www.cisco.com/en/us/products/ps12369/index.html

Cisco UCS Virtual Interface Card (VIC) 1225: FCoE PCIe adapter used with Cisco UCS C220 servers. Used for converged networking. www.cisco.com/en/us/prod/collateral/modules/ps10277/ps12571/data_sheet_c78-708295.html

Network components

Cisco Nexus 1000V Series Switches: A software switch on a server that delivers Cisco VN-Link services to virtual machines hosted on that server. www.cisco.com/en/us/products/ps9902/index.html

VMware Virtual Distributed Switch (VDS): A VMware vCenter-managed software switch that delivers advanced network services to virtual machines hosted on that server. http://www.vmware.com/products/vsphere/featuresdistributed-switch

Cisco Nexus 3048 Switch: Provides wire-speed Layer 2 and 3 switching for data center top-of-rack deployments. These switches deliver flexible port densities, low power, and programmability on a data-center-class Cisco Nexus operating system (Cisco NX-OS). www.cisco.com/en/us/products/ps11541/index.html

Cisco Nexus 5548UP Switch: Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. www.cisco.com/en/us/products/ps11681/index.html

Storage components

EMC VNX5200: High-performing unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. www.emc.com/products/series/vnxseries.htm

www.vce.com

About VCE

VCE, an EMC Federation Company, is the world market leader in converged infrastructure and converged solutions. VCE accelerates the adoption of converged infrastructure and cloud-based computing models that reduce IT costs while improving time to market. VCE delivers the industry's only fully integrated and virtualized cloud infrastructure systems, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. VCE solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure.

For more information, go to http://www.vce.com.

Copyright

All rights reserved. VCE, VCE Vision, VCE Vscale, Vblock, VxBlock, VxRack, and the VCE logo are registered trademarks or trademarks of VCE Company LLC. All other trademarks used herein are the property of their respective owners.