VCE Vblock System 340 Gen 3.2 Architecture Overview


Document revision 3.7, February 2015

Contents

Revision history
Introduction
Accessing VCE documentation
System overview
  System architecture and components
  Base configurations and scaling
  Connectivity overview
  Segregated network architecture
  Unified network architecture
Compute layer
  Compute overview
  Cisco Unified Computing System
  Cisco Unified Computing System fabric interconnects
  Cisco Trusted Platform Module
  Scaling up compute resources
  VCE bare metal support policy
  Disjoint layer 2 configuration
Storage layer
  Storage overview
  EMC VNX series storage arrays
  Replication
  Scaling up storage resources
  Storage features support
Network layer
  Network overview
  IP network components
  Port utilization
  Cisco Nexus 5548UP Switch - segregated networking
  Cisco Nexus 5596UP Switch - segregated networking
  Cisco Nexus 5548UP Switch - unified networking
  Cisco Nexus 5596UP Switch - unified networking
  Cisco Nexus 9396PX Switch - segregated networking
  Storage switching components
Virtualization layer
  Virtualization overview
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server
Management
  Management components overview
  Management hardware components
  Management software components
  Management network connectivity
System infrastructure
  Vblock System 340 descriptions
  Cabinets overview
  Power options
  Configuration descriptions
    Vblock System 340 with EMC VNX5400
    Vblock System 340 with EMC VNX5600
    Vblock System 340 with EMC VNX5800
    Vblock System 340 with EMC VNX7600
    Vblock System 340 with EMC VNX8000
  Sample configurations
    Sample Vblock System 340 with EMC VNX
    Sample Vblock System 340 with EMC VNX
    Sample Vblock System 340 with EMC VNX5800 (ACI ready)
Additional references
  Virtualization components
  Compute components
  Network components
  Storage components

Revision history

- February 2015: Added support for the Cisco B200 M4 Blade.
- December 2014: Added support for AMP-2HA.
- September 2014: Modified elevations and removed the aggregate section.
- July 2014: Added support for VMware VDS.
- May 2014: Updated for the Cisco Nexus 9396PX Switch and 1500 drives for the EMC VNX8000. Added support for VMware vSphere 5.5.
- January 2014: Updated elevations for the AMP-2 reference.
- November 2013: Updated the network connectivity management illustration.
- October 2013: Gen 3.1 release.

Introduction

This document describes the high-level design of the Vblock System 340. It also describes the hardware and software components that VCE includes in each Vblock 340.

The target audience for this document includes sales engineers, field consultants, advanced services specialists, and customers who want to deploy a virtualized infrastructure by using the Vblock 340. The VCE Glossary provides terms, definitions, and acronyms that are related to Vblock Systems.

To suggest documentation changes and provide feedback on this book, send an email that includes the name of the topic to which your feedback applies.

Related information: Accessing VCE documentation (see page 6)

Accessing VCE documentation

Select the documentation resource that applies to your role:

- Customer: support.vce.com (a valid username and password are required). Click VCE Download Center to access the technical documentation.
- VCE Partner: partner.vce.com (a valid username and password are required).
- Cisco, EMC, VCE, or VMware employee: portal.vce.com
- VCE employee: sales.vce.com/saleslibrary or vblockproductdocs.ent.vce.com

System overview

System architecture and components

This topic provides an overview of the Vblock System 340 architecture and components. Vblock 340 features include:

- Optimized, fast-delivery configurations based on the most commonly purchased components
- Standardized Vblock 340 cabinets with multiple North American and international power solutions
- Block (SAN) and unified (SAN and NAS) storage options
- Support for multiple features of the EMC operating environment for EMC VNX arrays
- Granular but optimized compute and storage growth by adding predefined kits and packs
- Second generation of the Advanced Management Platform (AMP-2) for Vblock System management
- Unified network architecture, which provides the option to leverage Cisco Nexus switches to support IP and SAN without the use of Cisco MDS switches

Each Vblock 340 contains the following key hardware and software components:

Vblock System management:
- VCE Vision Intelligent Operations System Library
- VCE Vision Intelligent Operations Plug-in for vCenter
- VCE Vision Intelligent Operations Compliance Checker
- VCE Vision Intelligent Operations API for System Library
- VCE Vision Intelligent Operations API for Compliance Checker

Virtualization and management:
- VMware vSphere Server Enterprise Plus
- VMware vSphere ESXi
- VMware vCenter Server
- VMware vSphere Web Client
- VMware Single Sign-On (SSO) Service (version 5.1 and higher)
- Cisco UCS C220 Server for AMP-2
- EMC PowerPath/VE
- Cisco UCS Manager
- EMC Unisphere Manager
- EMC VNX Local Protection Suite
- EMC VNX Remote Protection Suite
- EMC VNX Application Protection Suite
- EMC VNX Fast Suite
- EMC VNX Security and Compliance Suite
- EMC Secure Remote Support (ESRS)
- EMC PowerPath Electronic License Management Server (ELMS)
- Cisco Data Center Network Manager for SAN

Compute:
- Cisco UCS 5108 Server Chassis
- Cisco UCS B-Series M3 Blade Servers with Cisco UCS VIC 1240, optional port expander, or Cisco UCS VIC 1280
- Cisco UCS B-Series M4 Blade Servers with Cisco UCS VIC 1340, optional port expander, or Cisco UCS VIC 1380
- Cisco UCSB-MLOM-PT-01 port expander for the 1240 VIC
- Cisco UCS 2208XP or Cisco UCS 2204XP fabric extenders, with or without FET Optics
- Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric Interconnects

Network:
- Cisco Nexus 1000V Series Switches
- (Optional) VMware vSphere Distributed Switch (VDS) (VMware vSphere version 5.5 and higher)
- Cisco Nexus 3048 Switches
- Cisco Nexus 5548UP Switches, Cisco Nexus 5596UP Switches, or Cisco Nexus 9396PX Switches
- (Optional) Cisco MDS 9148 Multilayer Fabric Switch

Storage:
- EMC VNX storage array (5400, 5600, 5800, 7600, 8000) running the VNX Operating Environment
- (Optional) EMC unified storage (NAS)

Each Vblock 340 has a different scale point based on compute and storage options. Each Vblock 340 can support block and/or unified storage protocols.

9 System overview Vblock 340 Gen 3.2 Architecture Overview The following illustration provides a high level overview of the components in the Vblock 340 architecture: The VCE Release Certification Matrix provides a list of the certified versions of components for Vblock 340. For information about Vblock System management, refer to the VCE Vision Intelligent Operations Technical Overview. The VCE Vblock System Data Protection Guide provides information about available data protection solutions. Related information Accessing VCE documentation (see page 6) EMC VNX series storage arrays (see page 25) 9

Base configurations and scaling

Each Vblock System has a base configuration that contains a minimum set of compute and storage components, as well as fixed network resources, integrated within one or more 19-inch, 42U cabinets. Within the base configuration, the following hardware aspects can be customized:

Compute blades: Cisco UCS B-Series blade types include all supported VCE blade configurations.

Compute chassis (Cisco UCS Server Chassis):
- Sixteen chassis maximum for Vblock System 340 with EMC VNX8000, Vblock 340 with EMC VNX7600, and Vblock 340 with EMC VNX5800
- Eight chassis maximum for Vblock 340 with EMC VNX5600
- Two chassis maximum for Vblock 340 with EMC VNX5400

Storage hardware: Drive flexibility for up to three tiers of storage per pool, drive quantities in each tier, the RAID protection for each pool, and the number of disk array enclosures (DAEs).

Storage: EMC VNX storage, block only or unified (SAN and NAS).

Supported disk drives:
- FAST Cache: 100/200 GB SLC SSD
- Tier 0: 100/200 GB SLC SSD, 100/200/400 GB eMLC SSD
- Tier 1: 300/600 GB 15K SAS, 600/900 GB 10K SAS
- Tier 2: 1/2/3/4 TB 7.2K NL-SAS

Supported RAID types:
- Tier 0: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1)
- Tier 1: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
- Tier 2: RAID 1/0 (4+4), RAID 5 (4+1) or (8+1), RAID 6 (6+2), (12+2)*, (14+2)**
*file virtual pool only
**block virtual pool only

Management hardware options: The second generation of the Advanced Management Platform (AMP-2) centralizes management of Vblock System components. AMP-2 offers minimum physical, redundant physical, and highly available models. The standard option for this platform is the minimum physical model.

Data Mover enclosure (DME) packs: Available on all Vblock Systems. Additional enclosure packs can be added for additional X-Blades on Vblock 340 with EMC VNX8000, Vblock 340 with EMC VNX7600, and Vblock 340 with EMC VNX5800.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the system. All components have N+N or N+1 redundancy. These resources can be scaled up as necessary to meet increasingly stringent requirements. The maximum supported configuration differs from model to model. To scale up compute resources, add blade packs and chassis activation kits. To scale up storage resources, add RAID packs, DME packs, and DAE packs. Optionally, expansion cabinets with additional resources can be added.

Vblock Systems are designed to keep hardware changes to a minimum if the storage protocol is changed after installation (for example, from block storage to unified storage). Cabinet space can be reserved for all components that are needed for each storage configuration (Cisco MDS switches, X-Blades, and so on), ensuring that network and power cabling capacity for these components is in place.

Related information: EMC VNX series storage arrays (see page 25); Scaling up compute resources (see page 21); Scaling up storage resources (see page 27); Management components overview (see page 45); Replication (see page 27)

Connectivity overview

This topic describes the components and interconnectivity within the Vblock System. These components and interconnectivity are conceptually subdivided into the following layers:

- Compute: Contains the components that provide the computing power within a Vblock System. The Cisco UCS blade servers, chassis, and fabric interconnects belong to this layer.
- Storage: Contains the EMC VNX storage component.
- Network: Contains the components that provide switching between the compute and storage layers within a Vblock System, and between a Vblock System and the network. Cisco MDS switches and the Cisco Nexus switches belong to this layer.

12 Vblock 340 Gen 3.2 Architecture Overview System overview All components incorporate redundancy into the design. Segregated network architecture and unified network architecture In the segregated network architecture, LAN and SAN connectivity is segregated into separate switches within a Vblock System. LAN switching uses the Cisco Nexus switches. SAN switching uses the Cisco MDS 9148 Multilayer Fabric Switch. In the unified network architecture, LAN and SAN switching is consolidated onto a single network device (Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches) within the Vblock System. This removes the need for a Cisco MDS SAN switch. In addition, all management interfaces for infrastructure power outlet unit (POU), network, storage, and compute devices are connected to redundant Cisco Nexus 3048 switches. These switches provide connectivity for Advanced Management Platform (AMP-2) and egress points into the management stacks for the Vblock System components. Refer to the VCE Vblock System 340 Port Assignments Reference for information about the assigned use for each port in the Vblock System. Related information Accessing VCE documentation (see page 6) Management components overview (see page 45) Segregated network architecture (see page 12) Unified network architecture (see page 15) Segregated network architecture This topic shows Vblock System 340 segregated network architecture for block, SAN boot, and unified storage. Block storage configuration The following illustration shows a block-only storage configuration for Vblock 340 with the X-Blades absent from the cabinets. However, space can be reserved in the cabinets for these components 12

13 System overview Vblock 340 Gen 3.2 Architecture Overview (including optional EMC RecoverPoint Appliances). This design makes it easier to add the components later if there is an upgrade to unified storage. SAN boot storage configuration In all Vblock 340 configurations, the VMware vsphere ESXi blades boot over the Fibre Channel (FC) SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the SAN. In a unified storage configuration, the boot devices are presented over FC and data service can be either block devices (SAN) or presented as NFS data stores (NAS). In a file-only configuration, the boot devices are presented over FC and data devices are through NFS shares. Storage can also be presented directly to the VMs as CIFS shares. 13

14 Vblock 340 Gen 3.2 Architecture Overview System overview The following illustration shows the components (highlighted in a red, dotted line) that are leveraged to support SAN booting in Vblock 340: 14

15 System overview Vblock 340 Gen 3.2 Architecture Overview Unified storage configuration In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X- Blades connect to the Cisco Nexus switches within the network layer over 10 GbE, as shown in the following illustration: Related information Connectivity overview (see page 11) Unified network architecture (see page 15) Unified network architecture The topic provides an overview of the block storage, SAN boot storage, and unified storage configurations for the unified network architecture. 15

16 Vblock 340 Gen 3.2 Architecture Overview System overview With unified network architecture, access to both block and file services on the EMC VNX is provided using the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch. The Cisco Nexus 9396PX Switch is not supported in unified network architecture. Block storage configuration The following illustration shows a block-only storage configuration in Vblock System 340: In this example, there are no X-Blades providing NAS capabilities. However, space can be reserved in the cabinets for these components (and including the optional EMC RecoverPoint Appliance). This design makes it easier to add the components later if there is an upgrade to unified storage. In a unified storage configuration for block and file, the storage processors also connect to X-Blades over Fibre Channel (FC). The X-Blades connect to the Cisco Nexus switches within the network layer over 10 GbE. 16

17 System overview Vblock 340 Gen 3.2 Architecture Overview SAN boot storage configuration In all Vblock 340 configurations, VMware vsphere ESXi blades boot over the FC SAN. In block-only configurations, block storage devices (boot and data) are presented over FC through the Cisco Nexus unified switch. In a unified storage configuration, the boot devices are presented over FC and data devices can be either block devices (SAN) or presented as NFS data stores (NAS). In a file-only configuration, boot devices are presented over FC, and data devices over NFS shares. The remainder of the storage can be presented either as NFS or as VMFS data stores. Storage can also be presented directly to the VMs as CIFS shares. The following illustration shows the components that are leveraged to support SAN booting in Vblock 340: Unified storage configuration In a unified storage configuration, the storage processors also connect to X-Blades over FC. The X- Blades connect to the Cisco Nexus switches within the network layer over 10 GbE. 17

18 Vblock 340 Gen 3.2 Architecture Overview System overview The following illustration shows a unified storage configuration for Vblock 340: Related information Connectivity overview (see page 11) Management components overview (see page 45) Segregated network architecture (see page 12) 18

Compute layer

Compute overview

This topic provides an overview of the compute components for the Vblock System. Cisco UCS B-Series Blades installed in the Cisco UCS chassis provide computing power within a Vblock System. Fabric extenders (FEX) within the Cisco UCS chassis connect to Cisco fabric interconnects over converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP and storage traffic.

VCE has reserved some of these ports to connect to upstream access switches within the Vblock System. These connections are formed into a port channel to the Cisco Nexus switch and carry IP traffic destined for the external network over 10 GbE links. In a unified storage configuration, this port channel can also carry NAS traffic to the X-Blades within the storage layer.

Each fabric interconnect also has multiple ports reserved by VCE for Fibre Channel (FC) connectivity. These ports connect to Cisco SAN switches, and the connections carry FC traffic between the compute layer and the storage layer. In a unified storage configuration, port channels carry IP traffic to the X-Blades for NAS connectivity. For SAN connectivity, SAN port channels carrying FC traffic are configured between the fabric interconnects and upstream Cisco MDS or Cisco Nexus switches.

Related information: Accessing VCE documentation (see page 6)

Cisco Unified Computing System

This topic provides an overview of the Cisco Unified Computing System (UCS) data center platform, which unites compute, network, and storage access. Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb Ethernet unified network fabric with enterprise-class, x86-based servers (the Cisco B-Series). Vblock Systems powered by Cisco UCS feature:

- Built-in redundancy for high availability
- Hot-swappable components for serviceability, upgrade, or expansion
- Fewer physical components than in a comparable system built piece by piece

- Reduced cabling
- Improved energy efficiency over traditional blade server chassis

The VCE Vblock Systems Blade Pack Reference provides a list of supported Cisco UCS blades.

Related information: Accessing VCE documentation (see page 6)

Cisco Unified Computing System fabric interconnects

The Cisco Unified Computing System (UCS) fabric interconnects provide network connectivity and management capabilities to the Cisco UCS blades and chassis. They provide the management and communication backbone for the blades and chassis, and LAN and SAN connectivity for all blades within their domain. Cisco UCS fabric interconnects are used for boot functions and offer line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.

The Vblock System 340 uses Cisco UCS 6248UP Fabric Interconnects and Cisco UCS 6296UP Fabric Interconnects. Single-domain uplinks of 2, 4, or 8 between the fabric interconnects and the chassis are provided with the Cisco UCS 6248UP Fabric Interconnects. Single-domain uplinks of 4 or 8 between the fabric interconnects and the chassis are provided with the Cisco UCS 6296UP Fabric Interconnects.

Cisco Trusted Platform Module

Cisco Trusted Platform Module (TPM) is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that authenticate the Vblock System. Cisco TPM provides authentication and attestation services that enable safer computing in all environments.

Cisco TPM is available by default within the Vblock System as a component within the Cisco UCS B-Series M3 Blade Servers and Cisco UCS B-Series M4 Blade Servers, and is shipped disabled. The VCE Vblock Systems Blade Pack Reference contains additional information about Cisco TPM.

VCE supports only the Cisco TPM hardware, not the Cisco TPM functionality. Since making effective use of the Cisco TPM involves a software stack from a vendor with significant experience in trusted computing, VCE defers to the software stack vendor for configuration and operational considerations relating to Cisco TPMs.

Scaling up compute resources

This topic describes what you can add to your Vblock System 340 to scale up compute resources. To scale up compute resources, you can add uplinks, blade packs, and chassis activation kits to enhance Ethernet and Fibre Channel (FC) bandwidth, either when Vblock Systems are built or after they are deployed.

The following list shows the maximum chassis (and blade) quantities that are supported for the Vblock 340 with EMC VNX5400, VNX5600, VNX5800, VNX7600, and VNX8000, by fabric interconnect and IOM link configuration:

- Vblock 340 (8000), (7600), and (5800): 16 chassis (128 blades) with 2-link Cisco UCS 6248UP and Cisco UCS 2204XP IOM; 8 (64) with 4-link Cisco UCS 6248UP and Cisco UCS 2204XP IOM; 16 (128) with 4-link Cisco UCS 6296UP and Cisco UCS 2204XP IOM; 4 (32) with 8-link Cisco UCS 6248UP and Cisco UCS 2208XP IOM; 8 (64) with 8-link Cisco UCS 6296UP and Cisco UCS 2208XP IOM
- Vblock 340 (5600): 8 chassis (64 blades) with 4-link Cisco UCS 6248UP and Cisco UCS 2204XP IOM; 8 (64) with 4-link Cisco UCS 6296UP and Cisco UCS 2204XP IOM; 4 (32) with 8-link Cisco UCS 6248UP and Cisco UCS 2208XP IOM; 8 (64) with 8-link Cisco UCS 6296UP and Cisco UCS 2208XP IOM; the 2-link configuration is not available
- Vblock 340 (5400): 2 chassis (16 blades) with 4-link Cisco UCS 6248UP and Cisco UCS 2204XP IOM only

Ethernet and FC I/O bandwidth enhancement

For Vblock 340 (5600), Vblock 340 (5800), Vblock 340 (7600), and Vblock 340 (8000), Ethernet I/O bandwidth enhancement increases the number of Ethernet uplinks from the Cisco UCS 6296UP fabric interconnects to the network layer to reduce oversubscription. Ethernet I/O bandwidth performance can be enhanced by increasing the uplinks between the Cisco UCS 6296UP fabric interconnects and the Cisco Nexus 5548UP Switch for segregated networking, or the Cisco Nexus 5596UP Switch for unified networking.

FC I/O bandwidth enhancement increases the number of FC links between the Cisco UCS 6248UP or Cisco UCS 6296UP fabric interconnects and the SAN switch, and from the SAN switch to the EMC VNX storage array. The FC I/O bandwidth enhancement feature is supported on Vblock 340 (5800), Vblock 340 (7600), and Vblock 340 (8000).
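The chassis and blade maximums above translate into a simple sizing check. The following Python sketch is illustrative only: it assumes eight half-width blades per Cisco UCS 5108 chassis (a full-width blade consumes two slots) and uses representative per-model chassis maximums taken from the list above; it is not a VCE sizing tool.

import math

# Assumed values: eight half-width blades per chassis; chassis maximums
# drawn from the configuration list above (best case per model).
BLADES_PER_CHASSIS = 8
MAX_CHASSIS = {
    "VNX8000": 16, "VNX7600": 16, "VNX5800": 16,
    "VNX5600": 8,
    "VNX5400": 2,
}

def chassis_for_blades(model, blade_count):
    """Return the chassis needed for a blade count, checking the model limit."""
    chassis = math.ceil(blade_count / BLADES_PER_CHASSIS)
    if chassis > MAX_CHASSIS[model]:
        raise ValueError(f"{model} supports at most {MAX_CHASSIS[model]} chassis")
    return chassis

print(chassis_for_blades("VNX5600", 40))  # -> 5 chassis for 40 half-width blades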

22 Vblock 340 Gen 3.2 Architecture Overview Compute layer Blade packs Cisco UCS blades are sold in packs of two and include two identical Cisco UCS blades. The base configuration of each Vblock System includes two blade packs. The maximum number of blade packs depends on the type of Vblock 340. Each blade type must have a minimum of two blade packs as a base configuration and then can be increased in single blade pack increments thereafter. Each blade pack is added along with the following license packs: VMware vsphere ESXi Cisco Nexus 1000V Series Switches (Cisco Nexus 1000V Advanced Edition only) EMC PowerPath/VE Note: License packs for VMware vsphere ESXi, Cisco Nexus 1000V Series Switches, and EMC PowerPath are not available for bare metal blades. The VCE Vblock System Blade Pack Reference provides a list of supported Cisco UCS blades. Chassis activation kits The power supplies and fabric extenders for all chassis are populated and cabled, and all required Twinax cables and transceivers are populated. As more blades are added and additional chassis are required, chassis activation kits (CAK) are automatically added to an order. The kit contains software licenses to enable additional fabric interconnect ports. Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Chassis activation kits can be added up-front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis. Related information Accessing VCE documentation (see page 6) VCE bare metal support policy Since many applications cannot be virtualized due to technical and commercial reasons, Vblock Systems support bare metal deployments, such as non-virtualized operating systems and applications. 22

While it is possible for a Vblock System to support these workloads (with the caveats noted below), due to the nature of bare metal deployments, VCE is only able to provide "reasonable effort" support for systems that comply with the following requirements:

- The Vblock System contains only VCE published, tested, and validated hardware and software components. The VCE Release Certification Matrix provides a list of the certified versions of components for Vblock Systems.
- The operating systems used on bare metal deployments for compute and storage components must comply with the published hardware and software compatibility guides from Cisco and EMC.
- For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by VCE. VCE support is provided only on VMware hypervisors.

VCE reasonable effort support includes VCE acceptance of customer calls, a determination of whether the Vblock System is operating correctly, and assistance in problem resolution to the extent possible. VCE is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, VCE does not provide updates to or test those operating systems or applications. The OEM support vendor should be contacted directly for issues and patches related to those operating systems and applications.

Related information: Accessing VCE documentation (see page 6)

Disjoint layer 2 configuration

In the disjoint layer 2 configuration, traffic is routed to different networks at the fabric interconnect to support two or more discrete Ethernet clouds connected to the Cisco UCS servers. Upstream disjoint layer 2 networks allow two or more Ethernet clouds that never connect to be accessed by servers or VMs located in the same Cisco UCS domain.

24 Vblock 340 Gen 3.2 Architecture Overview Compute layer The following illustration provides an implementation of disjoint layer 2 networking into a Cisco UCS domain: Virtual port channels (VPCs) 101 and 102 are production uplinks that connect to the network layer of the Vblock System. Virtual port channels 105 and 106 are external uplinks that connect to other switches. If you use Ethernet performance port channels (PC 103 and 104 by default), port channels 101 through 104 are assigned to the same VLANs. 24
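A practical way to reason about this rule is to verify that no VLAN is carried by more than one disjoint uplink group, while the production and Ethernet performance port channels (101 through 104) are allowed to share VLANs. The following Python sketch is a hypothetical illustration: the VLAN IDs and port channel groupings are invented for the example and are not a VCE-prescribed configuration.

# Hypothetical VLAN assignments; port channels 101-104 intentionally share
# VLANs (production plus Ethernet performance uplinks), while 105/106 form
# a disjoint layer 2 cloud that must not overlap with them.
uplink_vlans = {
    "Po101": {100, 101, 102},
    "Po102": {100, 101, 102},
    "Po103": {100, 101, 102},
    "Po104": {100, 101, 102},
    "Po105": {200, 201},
    "Po106": {200, 201},
}

# Port channels allowed to carry the same VLANs (treated as one group).
shared_groups = [{"Po101", "Po102", "Po103", "Po104"}, {"Po105", "Po106"}]

def check_disjoint(assignments, groups):
    """Raise if any VLAN appears in more than one disjoint uplink group."""
    seen = {}
    for group in groups:
        vlans = set().union(*(assignments[pc] for pc in group))
        for vlan in vlans:
            if vlan in seen:
                raise ValueError(f"VLAN {vlan} appears in {seen[vlan]} and {group}")
            seen[vlan] = group
    print("VLAN assignment is disjoint across uplink groups")

check_disjoint(uplink_vlans, shared_groups)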

Storage layer

Storage overview

EMC VNX series arrays are fourth-generation storage platforms that deliver industry-leading capabilities. They offer a unique combination of flexible, scalable hardware design and advanced software capabilities that enable them to meet the diverse needs of today's organizations. EMC VNX series platforms support block storage and unified storage. The platforms are optimized for VMware virtualized applications. They feature flash drives for extendable cache and high performance in the virtual storage pools. Automation features include self-optimized storage tiering and application-centric replication.

Regardless of the storage protocol implemented at startup (block or unified), Vblock 340 can include cabinet space, cabling, and power to support the hardware for all of these storage protocols. This arrangement makes it easier to move from block storage to unified storage with minimal hardware changes.

The Vblock 340 is available with:
- EMC VNX5400
- EMC VNX5600
- EMC VNX5800
- EMC VNX7600
- EMC VNX8000

Note: In all Vblock Systems, all EMC VNX components are installed in VCE cabinets in a VCE-specific layout.

EMC VNX series storage arrays

This topic provides an overview of the EMC VNX series storage arrays. The EMC VNX series storage arrays use dual storage processors (SPs) that attach to the drives over 6 Gb/s four-lane serial attached SCSI (SAS). Each storage processor connects to one side of two, four, eight, or sixteen (depending on the Vblock 340) redundant pairs of four-lane 6 Gb/s SAS buses, providing continuous drive access to hosts in the event of a storage processor or bus fault. Fibre Channel (FC) expansion cards within the storage processors connect to the Cisco MDS switches in the network layer over FC.

The storage layer in the Vblock System consists of an EMC VNX storage array. Each EMC VNX model contains some or all of the following components:

- The disk processor enclosure (DPE) houses the service processors for the EMC VNX5400, EMC VNX5600, EMC VNX5800, and EMC VNX7600. The DPE provides slots for two service processors, two battery backup units (BBU), and an integrated 25-slot disk array enclosure (DAE) for 2.5" drives. Each SP provides support for up to 5 SLICs (small I/O cards).
- The EMC VNX8000 uses a service processor enclosure (SPE) and standby power supplies (SPS). The SPE is a 4U enclosure with slots for two service processors, each supporting up to 11 SLICs. Each EMC VNX8000 includes two 2U SPSs that power the SPE and the vault DAE. Each SPS contains two Li-ion batteries that require special shipping considerations.
- X-Blades (also known as Data Movers) provide file-level storage capabilities. These are housed in Data Mover enclosures (DME). Each X-Blade connects to the network switches using 10G links (either Twinax or 10G fibre).
- DAEs contain individual disk drives and are available in two configurations: a 2U model that can hold 25 2.5" disks and a 3U model that can hold 15 3.5" disks.

EMC VNX5400: The EMC VNX5400 is a DPE-based array with two back-end SAS buses, up to four slots for front-end connectivity, and support for up to 250 drives. It is available in both unified (NAS) and block configurations.

EMC VNX5600: The EMC VNX5600 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-end connectivity, and support for up to 500 drives. It is available in both unified (NAS) and block configurations.

EMC VNX5800: The EMC VNX5800 is a DPE-based array with up to six back-end SAS buses, up to five slots for front-end connectivity, and support for up to 750 drives. It is available in a block configuration.

EMC VNX7600: The EMC VNX7600 is a DPE-based array with six back-end SAS buses, up to four slots for front-end connectivity, and support for up to 1000 drives. It is available in a block configuration.

EMC VNX8000: The EMC VNX8000 comes in a different form factor from the other EMC VNX models. It is an SPE-based model with up to 16 back-end SAS buses, up to nine slots for front-end connectivity, and support for up to 1500 drives. It is available in a block configuration.
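As a rough illustration of how drive counts map onto enclosures, the following Python sketch estimates DAE counts from a drive mix. It assumes only the two DAE form factors described above (25-slot 2U and 15-slot 3U) and uses the per-model maximum drive counts stated for each array; the example drive quantities are invented.

import math

# Slot counts from the DAE descriptions above.
SLOTS = {"2U_2.5in": 25, "3U_3.5in": 15}

# Per-model maximum drive counts from the array descriptions above.
MAX_DRIVES = {"VNX5400": 250, "VNX5600": 500, "VNX5800": 750,
              "VNX7600": 1000, "VNX8000": 1500}

def dae_estimate(model, drives_25in, drives_35in):
    """Estimate DAE counts for a drive mix, checking the model's drive limit."""
    total = drives_25in + drives_35in
    if total > MAX_DRIVES[model]:
        raise ValueError(f"{model} supports at most {MAX_DRIVES[model]} drives")
    return {"2U_2.5in": math.ceil(drives_25in / SLOTS["2U_2.5in"]),
            "3U_3.5in": math.ceil(drives_35in / SLOTS["3U_3.5in"])}

print(dae_estimate("VNX5800", drives_25in=200, drives_35in=60))
# -> {'2U_2.5in': 8, '3U_3.5in': 4}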

Related information: Storage features support (see page 30)

Replication

This section describes how the Vblock System 340 can be upgraded to include EMC RecoverPoint. For block storage configurations, Vblock 340 can be upgraded to include EMC RecoverPoint. This replication technology provides continuous data protection and continuous remote replication for on-demand protection and recovery to any point in time. EMC RecoverPoint advanced capabilities include policy-based management, application integration, and bandwidth reduction. RecoverPoint is included in the EMC Local Protection Suite and EMC Remote Protection Suite.

To implement EMC RecoverPoint within a Vblock System, add two or more EMC RecoverPoint Appliances (RPA) in a cluster to the Vblock System. This cluster can accommodate approximately 80 MB/s of sustained throughput through each EMC RPA. To ensure proper sizing and performance of an EMC RPA solution, VCE works with an EMC Technical Consultant. They collect information about the data to be replicated, as well as data change rates, data growth rates, network speeds, and other information that is needed to ensure that all business requirements are met.

Scaling up storage resources

This topic describes what you can add to your Vblock System 340 to scale up storage resources. To scale up storage resources, you can expand block I/O bandwidth between the compute and storage resources, add RAID packs, and add disk array enclosure (DAE) packs. I/O bandwidth and packs can be added when Vblock 340s are built and after they are deployed.

I/O bandwidth expansion

Fibre Channel (FC) bandwidth can be increased in the Vblock 340 with EMC VNX8000, Vblock 340 with EMC VNX7600, and Vblock 340 with EMC VNX5800. This option adds an additional four FC interfaces per fabric between the fabric interconnects and the Cisco MDS 9148 Multilayer Fabric Switch (segregated network architecture) or the Cisco Nexus 5548UP Switch or Cisco Nexus 5596UP Switch (unified network architecture). It also adds an additional four FC ports from the EMC VNX to each SAN fabric. This option is available for environments that require high-bandwidth, block-only configurations. This configuration requires the use of four storage array ports per storage processor that are normally reserved for unified connectivity of the X-Blades.
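To put the FC I/O bandwidth expansion in perspective, the following Python sketch compares per-fabric FC bandwidth before and after adding the extra four links. It assumes 8 Gb/s FC ports and the four base FC uplinks per fabric shown in the port utilization tables later in this document; it is an illustration, not a performance guarantee.

FC_PORT_GBPS = 8           # 8 Gb/s Fibre Channel ports
BASE_UPLINKS_PER_FABRIC = 4
EXPANSION_UPLINKS = 4      # added by the FC I/O bandwidth expansion option

def fabric_fc_bandwidth(uplinks, port_gbps=FC_PORT_GBPS):
    """Aggregate FC bandwidth per SAN fabric, in Gb/s."""
    return uplinks * port_gbps

print(fabric_fc_bandwidth(BASE_UPLINKS_PER_FABRIC))                      # 32 Gb/s
print(fabric_fc_bandwidth(BASE_UPLINKS_PER_FABRIC + EXPANSION_UPLINKS))  # 64 Gb/s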

RAID packs

Storage capacity can be increased by adding RAID packs. Each pack contains a number of drives of a given type, speed, and capacity. The number of drives in a pack depends upon the RAID level that it supports. The number and types of RAID packs to include in a Vblock System is based upon the following:

- The number of storage pools that are needed.
- The storage tiers that each pool contains, and the speed and capacity of the drives in each tier. The supported tiers, drive types, speeds, and capacities are listed below. Note: The speed and capacity of all drives within a given tier in a given pool must be the same.
  - Tier 1, solid-state Enterprise Flash drives (EFD): 100 GB SLC EFD, 200 GB SLC EFD, 100 GB eMLC EFD, 200 GB eMLC EFD, 400 GB eMLC EFD
  - Tier 2, serial attached SCSI (SAS): 300 GB 10K RPM, 600 GB 10K RPM, 900 GB 10K RPM, 300 GB 15K RPM, 600 GB 15K RPM
  - Tier 3, nearline SAS: 1 TB 7.2K RPM, 2 TB 7.2K RPM, 3 TB 7.2K RPM
- The RAID protection level for the tiers in each pool. The RAID protection level for the different pools can vary. The supported RAID protection levels are described below.

RAID 1/0: A set of mirrored drives. Offers the best overall performance of the three supported RAID protection levels. Offers robust protection: it can sustain double-drive failures that are not in the same mirror set. Lowest economy of the three supported RAID levels, since usable capacity is only 50% of raw capacity.

RAID 5: Block-level striping with a single parity block, where the parity data is distributed across all of the drives in the set. Offers the best mix of performance, protection, and economy. Has a higher write performance penalty than RAID 1/0 because multiple I/Os are required to perform a single write. With single parity, it can sustain a single drive failure with no data loss, but it is vulnerable to data loss or unrecoverable read errors on a track during a drive rebuild. Highest economy of the three supported RAID levels; usable capacity is 80% of raw capacity or better.

RAID 6: Block-level striping with two parity blocks, distributed across all of the drives in the set. Offers increased protection and read performance comparable to RAID 5. Has a significant write performance penalty because multiple I/Os are required to perform a single write. Economy is very good; usable capacity is 75% of raw capacity or better. RAID 6 is the EMC best practice for SATA and NL-SAS drives.

There are RAID packs for each RAID protection level/tier type combination. The RAID levels dictate the number of drives that are included in the packs. RAID 5 or RAID 1/0 is used for the performance and extreme performance tiers, and RAID 6 is used for the capacity tier. The following table lists RAID protection levels and the number of drives in the pack for each level:

Number of drives per RAID pack:
- RAID 1/0: 8 (4 data + 4 mirrors)
- RAID 5: 5 (4 data + 1 parity) or 9 (8 data + 1 parity)
- RAID 6: 8 (6 data + 2 parity), 14 (12 data + 2 parity)*, or 16 (14 data + 2 parity)**
*file virtual pool only
**block virtual pool only

Disk array enclosure packs

If the number of RAID packs in a Vblock System is expanded, more disk array enclosures (DAEs) might be required. DAEs are added in packs. The number of DAEs in each pack is equivalent to the number of back-end buses in the EMC VNX array in the Vblock System. The following list shows the number of buses in the array and the number of DAEs in the DAE pack for each Vblock 340:

- EMC VNX8000: 8 or 16 buses; 8 or 16 DAEs per DAE pack
- EMC VNX7600: 6 buses; 6 DAEs per DAE pack
- EMC VNX5800: 6 buses; 6 DAEs per DAE pack
- EMC VNX5600: 2 or 6 buses; 2 or 6 DAEs per DAE pack (base includes the DPE as the first DAE)
- EMC VNX5400: 2 buses; 2 DAEs per DAE pack (base includes the DPE as the first DAE)
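The RAID pack compositions listed above determine how much usable capacity each pack contributes. The following Python sketch is illustrative only: the pack compositions come from the table above, while the drive capacity in the example is an arbitrary choice.

# RAID pack composition from the table above: (data drives, protection drives).
RAID_PACKS = {
    "RAID 1/0": (4, 4),
    "RAID 5 (4+1)": (4, 1),
    "RAID 5 (8+1)": (8, 1),
    "RAID 6 (6+2)": (6, 2),
    "RAID 6 (12+2)": (12, 2),   # file virtual pool only
    "RAID 6 (14+2)": (14, 2),   # block virtual pool only
}

def pack_usable_capacity(raid_level, drive_capacity_gb):
    """Usable capacity of one RAID pack and its usable-to-raw ratio."""
    data, protection = RAID_PACKS[raid_level]
    usable = data * drive_capacity_gb
    raw = (data + protection) * drive_capacity_gb
    return usable, usable / raw

# Example (assumed drive size): a RAID 5 (4+1) pack of 600 GB SAS drives.
usable, efficiency = pack_usable_capacity("RAID 5 (4+1)", 600)
print(usable, round(efficiency, 2))   # -> 2400 GB usable, 0.8 of raw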

There are two types of DAEs: a 2U 25-slot DAE for 2.5" disks and a 3U 15-slot DAE for 3.5" disks. A DAE pack can contain a mix of DAE sizes, as long as the total number of DAEs in the pack equals the number of buses. To ensure that the loads are balanced, physical disks are spread across the DAEs in accordance with best practice guidelines.

Storage features support

This topic presents additional storage features available on the Vblock System 340.

Support for array hardware or capabilities

The following list provides an overview of the support provided in the EMC VNX operating environment for new array hardware or capabilities:

- NFS Virtual X-Blades VDM (Multi-LDAP support): Provides security and segregation for service provider environment clients.
- Data-in-place block compression: When compression is enabled, thick LUNs are converted to thin and compressed in place. RAID group LUNs are migrated into a pool during compression. There is no need for additional space to start compression. Decompression temporarily requires additional space, since it is a migration and not an in-place decompression.
- Compression for file / display of compression capacity savings: Available file compression types are fast compression (the default) and deep compression (up to 30% more space efficient, but slower and with higher CPU usage). The display of capacity savings due to compression allows a cost/benefit comparison (space savings versus performance impact).
- EMC VNX snapshots: EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time. Note: This feature is optional. Both types of snapshots have a seamless support perspective. VCE relies on guidance from EMC best practices for the different use cases of EMC SnapView snapshots versus EMC VNX snapshots.

Hardware features

VCE supports the following hardware features:
- Dual 10 GE Optical/Active Twinax IP I/O SLIC for X-Blades
- 2.5-inch vault drives
- 2.5-inch DAEs and drive form factors
- 3.5-inch DAEs and drive form factors

File deduplication

File deduplication is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

Block compression

Block compression is supported, but is not enabled by default. Enabling this feature requires knowledge of capacity and storage requirements.

External NFS and CIFS access

Vblock Systems can present CIFS and NFS shares to external clients provided that these provisions are followed:
- Vblock System shares cannot be mounted internally by Vblock System hosts and externally to the Vblock System at the same time.
- In a configuration with two X-Blades, mixed internal and external access is supported.
- In a configuration with more than two X-Blades, external NFS and CIFS access can run on one or more X-Blades that are physically separate from the X-Blades serving VMFS data stores to the Vblock System compute layer.

Snapshots

EMC VNX snapshots are only for storage pools, not for RAID groups. Storage pools can use EMC SnapView snapshots and EMC VNX snapshots at the same time. Note: EMC VNX snapshots are an optional feature. Both types of snapshots have a seamless support perspective. VCE relies on guidance from EMC best practices for the different use cases of EMC SnapView snapshots versus EMC VNX snapshots.

Replicas

For Vblock System NAS configurations, EMC VNX Replicator is supported. This software can create local clones (full copies) and replicate file systems asynchronously across IP networks. EMC VNX Replicator is included in the EMC VNX Remote Protection Suite.

Network layer

Network overview

This topic provides an overview of the network components for the Vblock 340. The Cisco Nexus series switches in the network layer provide 10 or 40 GbE IP connectivity between the Vblock System and the external network. In the unified storage architecture, the switches also connect the fabric interconnects in the compute layer to the X-Blades in the storage layer.

In the segregated architecture, the Cisco MDS 9000 series switches in the network layer provide Fibre Channel (FC) links between the Cisco fabric interconnects and the EMC VNX array. These FC connections provide block-level devices to blades in the compute layer. In the unified network architecture, there are no Cisco MDS series storage switches; FC connectivity is provided by the Cisco Nexus 5548UP Switches or Cisco Nexus 5596UP Switches. Ports are reserved or identified for special Vblock System services such as backup, replication, or aggregation uplink connectivity.

The Vblock System contains two Cisco Nexus 3048 switches to provide management network connectivity to the different components of the Vblock System. These connections include the EMC VNX service processors, Cisco UCS fabric interconnects, Cisco Nexus 5500 series or Cisco Nexus 9396PX switches, and power output unit (POU) management interfaces.

IP network components

This topic describes the IP network components used by the Vblock System. The Vblock System uses Cisco UCS 6200 series fabric interconnects. Vblock 340 with EMC VNX5400 uses the Cisco UCS 6248UP Fabric Interconnects. All other Vblock Systems use the Cisco UCS 6248UP Fabric Interconnects or the Cisco UCS 6296UP Fabric Interconnects.

The Vblock 340 includes two Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus 9396PX switches to provide 10 or 40 GbE connectivity:
- Between the Vblock System internal components
- To the site network
- To the second-generation Advanced Management Platform (AMP-2) through redundant connections between AMP-2 and the Cisco Nexus 5548UP switches, Cisco Nexus 5596UP switches, or Cisco Nexus 9396PX switches

To support the Ethernet and SAN requirements in the traditional, segregated network architecture, two Cisco Nexus 5548UP switches or Cisco Nexus 9396PX switches provide Ethernet connectivity, and a pair of Cisco MDS switches provides Fibre Channel (FC) connectivity. The Cisco Nexus 5548UP Switch is available as an option for all segregated network Vblock Systems. It is also an option for unified network Vblock 340 with EMC VNX5400 and EMC VNX5600.

Cisco Nexus 5500 series switches

The two Cisco Nexus 5500 series switches support low-latency, line-rate, 10 Gb Ethernet and FC over Ethernet (FCoE) connectivity for up to 96 ports. Unified port expansion modules are available and provide an extra 16 ports of 10 GbE or FC connectivity. The FC ports are licensed in packs of eight on an on-demand basis.

The Cisco Nexus 5548UP switches have 32 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gb/s FC connectivity. The Cisco Nexus 5548UP switches have one expansion slot that can be populated with a 16-port unified port expansion module. The Cisco Nexus 5548UP Switch is the only network switch supported for Vblock 340 data connectivity in a Vblock 340 (5400).

The Cisco Nexus 5596UP switches have 48 integrated, low-latency, unified ports. Each port provides line-rate 10 Gb Ethernet or 8 Gb/s FC connectivity. The Cisco Nexus 5596UP switches have three expansion slots that can be populated with 16-port unified port expansion modules. The Cisco Nexus 5596UP Switch is available as an option for both network topologies for all Vblock Systems except the Vblock 340 (5400).

Cisco Nexus 9396PX Switch

The Cisco Nexus 9396PX Switch supports both 10 Gbps SFP+ ports and 40 Gbps QSFP+ ports. The Cisco Nexus 9396PX Switch is a two rack unit (2RU) appliance with all ports licensed and available for use. There are no expansion modules available for the Cisco Nexus 9396PX Switch. The Cisco Nexus 9396PX Switch provides 48 integrated, low-latency SFP+ ports. Each port provides line-rate 1/10 Gbps Ethernet. There are also 12 QSFP+ ports that provide line-rate 40 Gbps Ethernet.

Related information: Management hardware components (see page 45); Management software components (see page 46)

Port utilization

This section describes the switch port utilization for the Cisco Nexus 5548UP Switch and Cisco Nexus 5596UP Switch in segregated networking and unified networking configurations, as well as the Cisco Nexus 9396PX Switch in a segregated networking configuration.

34 Vblock 340 Gen 3.2 Architecture Overview Network layer Cisco Nexus 5548UP Switch - segregated networking This section describes port utilization for a Cisco Nexus 5548UP Switch segregated networking configuration. The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN traffic. The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with segregated networking: Feature Used ports Port speeds Media Uplinks from fabric interconnect (FI) 8* 10G Twinax Uplinks to customer core 8** Up to 10G SFP+ Uplinks to other Cisco Nexus 5000 Series Switches 2 10G Twinax AMP-2 ESX management 3 10G SFP+ *Vblock System 340 with VNX5400 only supports four links between the Cisco UCS FIs and Cisco Nexus 5548UP switches. **Vblock 340 with VNX5400 only supports four links between the Cisco Nexus 5548UP Switch and customer core network. The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the following additional connectivity option: Feature Available ports Port speeds Media Customer IP backup 3 1G or 10G SFP+ If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, there are 28 additional ports (beyond the core connectivity requirements) available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the additional connectivity for Cisco Nexus 5548UP Switch with a 16UP module: Feature Available ports Port speeds Media Customer IP backup 4 1G or 10G SFP+ Uplinks from Cisco UCS FI for Ethernet bandwidth (BW) enhancement 8 10G Twinax 34

35 Network layer Vblock 340 Gen 3.2 Architecture Overview Cisco Nexus 5596UP Switch - segregated networking This section describes port utilization for a Cisco Nexus 5596UP Switch segregated networking configuration. The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1G or 10G connectivity for LAN traffic. The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module) with segregated networking: Feature Used ports Port speeds Media Uplinks from Cisco UCS FI 8 10G Twinax Uplinks to customer core 8 Up to 10G SFP+ Uplinks to other Cisco Nexus 5000 Series Switches 2 10G Twinax AMP-2 ESX management 3 10G SFP+ The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity option: Feature Used ports Port speeds Media Customer IP backup 3 1G or 10G SFP+ If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports (beyond the core connectivity requirements) are available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the additional connectivity for the Cisco Nexus 5596UP Switch with one 16UP module: Note: Cisco Nexus 5596UP Switch with two or three 16UP modules is not supported with segregated networking. Feature Available ports Port speeds Media Customer IP backup 4 1G or 10G SFP+ Uplinks from Cisco UCS FIs for Ethernet BW enhancement 8 10G Twinax Cisco Nexus 5548UP Switch unified networking This section describes port utilization for a Cisco Nexus 5548UP Switch unified networking configuration. The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1G or 10G connectivity for LAN traffic or 2/4/8 Gbps FC traffic. 35

36 Vblock 340 Gen 3.2 Architecture Overview Network layer The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module) with unified networking for the Vblock 340 with VNX5400 only. Feature Used ports Port speeds Media Uplinks from Cisco UCS FI 4 10G Twinax Uplinks to customer core 4 Up to 10G SFP+ Uplinks to other Cisco Nexus 5K 2 10G Twinax AMP-2 ESX management 3 10G SFP+ FC uplinks from Cisco UCS FI 4 8G SFP+ FC links to EMC VNX array 6 8G SFP+ The following table shows the core connectivity for the Cisco Nexus 5548UP Switch with unified networking for Vblock 340 with EMC VNX5600: Feature Used ports Port speeds Media Uplinks from Cisco UCS FI 8 10G Twinax Uplinks to customer core 8 Up to 10G SFP+ Uplinks to other Cisco Nexus 5K 2 10G Twinax AMP-2 ESX management 3 10G SFP+ FC uplinks from UCS FI 4 8G SFP+ FC links to EMC VNX array 6 8G SFP+ The remaining ports in the base Cisco Nexus 5548UP Switch (no module) provide support for the following additional connectivity options for the Vblock 340 with VNX5400 only. Feature Available ports Port speeds Media X-Blade connectivity 2 10G EMC Active Twinax X-Blade NDMP connectivity 2 8G SFP+ Customer IP backup 3 1G or 10G SFP+ The remaining ports in the base Cisco Nexus 5548UP Switch provide support for the following additional connectivity options for the other Vblock Systems: Feature Available ports Port speeds Media EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) 2 1G GE_T SFP+ X-Blade connectivity 2 10G EMC Active Twinax Customer IP backup 2 1G or 10G SFP+ 36

37 Network layer Vblock 340 Gen 3.2 Architecture Overview If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, additional ports (beyond the core connectivity requirements) available to provide additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the additional connectivity for the Cisco Nexus 5548UP Switch with one 16UP module: Feature Available ports Port speeds Media EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) 4 1G GE_T SFP+ X-Blade connectivity 8 10G EMC Active Twinax Customer IP backup 4 1G or 10G SFP+ Uplinks from Cisco UCS FIs for Ethernet BW Enhancement 8 10G Twinax Cisco Nexus 5596UP Switch - unified networking This section describes port utilization for a Cisco Nexus 5596UP Switch unified networking configuration. The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1/10G connectivity for LAN traffic or 2/4/8 Gbps Fibre Channel (FC) traffic. The following table shows the core connectivity for the Cisco Nexus 5596UP Switch (no module): Feature Used ports Port speeds Media Uplinks from Cisco UCS FI 8 10G Twinax Uplinks to customer core 8 Up to 10G SFP+ Uplinks to other Cisco Nexus 5K 2 10G Twinax AMP-2 ESX management 3 10G SFP+ FC uplinks from Cisco UCS FI 4 8G SFP+ FC links to EMC VNX Array 6 8G SFP+ The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity options: Feature Minimum ports required for feature Port speeds Media X-Blade connectivity 4 10G EMC Active Twinax X-Blade NDMP connectivity 2 8G SFP+ IP backup solutions 4 1 or 10G SFP+ 37

Feature Minimum ports required for feature Port speeds Media EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) 2 1G GE_T SFP+ EMC RecoverPoint SAN links (two per EMC RecoverPoint Appliance) 4 8G SFP+ Up to three additional 16 unified port modules can be added to the Cisco Nexus 5596UP Switch (depending on the selected Vblock 340). Each module has 16 ports to enable additional feature connectivity. Actual feature availability and port requirements are driven by the model that is selected. The following table shows the connectivity options for the Cisco Nexus 5596UP Switch for slots 2-4: Feature Ports available for feature Port speeds Media Default module Uplinks from Cisco UCS FI for Ethernet BW enhancement 8 10G Twinax 1 EMC VPLEX SAN connections (4 per engine) 8 8G SFP+ 1 X-Blade connectivity 12 10G EMC Active Twinax 3 X-Blade NDMP connectivity 6 8G SFP+ 3,4 EMC RecoverPoint WAN links (1 per EMC RecoverPoint Appliance pair) 2 1G GE_T SFP+ EMC RecoverPoint SAN links (2 per EMC RecoverPoint Appliance) 8G SFP+ FC links from Cisco UCS fabric interconnect for FC BW enhancement 8G SFP+ FC links from EMC VNX array for FC BW enhancement 8G SFP+ 4

39 Network layer Vblock 340 Gen 3.2 Architecture Overview The following table shows core connectivity for the Cisco Nexus 9396PX Switch with segregated networking: Feature Used ports Port speeds Media Uplinks from fabric interconnect (FI) Uplinks to customer core*** 8* 10G Twinax 8(10G)**/2(40G) Up to 40G SFP+/QSFP+ VPC peer links 2 40G Twinax AMP-2 ESX management 3 10G SFP+ *Vblock 340 with VNX5400 only supports four links between the Cisco UCS FIs and Cisco Nexus 9396PX switches. ** Vblock 340 with VNX5400 only supports four links between the Cisco Nexus 9396PX Switch and customer core network. *** Vblock 340 and Nexus 9396PX supports 40G or 10G SFP+ uplinks to customer core. The remaining ports in the Cisco Nexus 9396PX Switch provide support for a combination of the following additional connectivity options: Feature Available ports Port Speeds Media EMC RecoverPoint WAN links (one per EMC RecoverPoint Appliance pair) 4 1G GE T SFP+ Customer IP backup 8 1G or 10G SFP+ X-Blade connectivity 8 10G EMC Active Twinax Uplinks from Cisco UCS FIs for Ethernet BW enhancement* 8 10G Twinax *Not supported with Vblock 340 with VNX 5400 Storage switching components This section describes how each Vblock System 340 includes redundant Cisco SAN fabric switches. In a segregated networking model, there are two Cisco MDS 9148 multilayer fabric switches. In a unified networking model, Fibre Channel (FC) based features are provided by the two Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches that are also used for LAN traffic. 39

40 Vblock 340 Gen 3.2 Architecture Overview Network layer In the Vblock System, these switches provide: FC connectivity between the compute layer components and the storage layer components Connectivity for backup, business continuity (EMC RecoverPoint Appliance), and storage federation requirements when configured. Note: Inter-Switch Links (ISLs) to the existing SAN are not permitted. The Cisco MDS 9148 Multilayer Fabric Switch provides from 16 to 48 line-rate ports for non-blocking 8 Gbps throughput. The port groups are enabled on an as needed basis. The Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches provide a number of line-rate ports for non-blocking 8 Gbps throughput. Expansion modules can be added to the Cisco Nexus 5596UP Switch that provide 16 additional ports operating at line-rate. The following tables define the port utilization for the SAN components when using a Cisco MDS 9148: Feature Used ports Port speeds Media FC uplinks from Cisco UCS FI 4 8G SFP+ FC links to EMC VNX array 6 8G SFP+ Feature Available ports Backup 2 FC links from Cisco UCS fabric interconnect (FI) for FC Bandwidth (BW) enhancement 4 FC links from EMC VNX array for FC BW enhancement 4 FC links to EMC VNX array dedicated for replication 2 EMC RecoverPoint SAN links (two per EMC RecoverPoint Appliance) 8 SAN aggregation 2 EMC VPLEX SAN connections (four per engine) 8 X-Blade network data management protocol (NDMP) connectivity 2 40
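To put the port counts above in perspective, the following minimal Python sketch (illustrative only, not part of the VCE documentation; it simply multiplies the 8G link counts listed in the tables above) compares aggregate FC bandwidth per SAN fabric with and without the FC bandwidth enhancement option:

```python
# Illustrative only: aggregate FC bandwidth per SAN fabric, derived from the
# default port counts above (4 FC uplinks from the UCS FIs, 6 FC links to the
# VNX array) and the 4 + 4 extra links that FC BW enhancement adds.
FC_PORT_GBPS = 8  # 8G FC links throughout the Vblock 340

def fabric_fc_bandwidth(fi_uplinks: int, vnx_links: int) -> dict:
    return {
        "UCS FI -> SAN fabric (Gbps)": fi_uplinks * FC_PORT_GBPS,
        "SAN fabric -> VNX (Gbps)": vnx_links * FC_PORT_GBPS,
    }

print("Default:    ", fabric_fc_bandwidth(fi_uplinks=4, vnx_links=6))
print("BW enhanced:", fabric_fc_bandwidth(fi_uplinks=4 + 4, vnx_links=6 + 4))
```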

41 Virtualization layer Vblock 340 Gen 3.2 Architecture Overview

Virtualization layer

Virtualization components

This topic provides an overview of the virtualization components. VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are the VMware vSphere Hypervisor ESXi and VMware vCenter Server for management. VMware vSphere 5.1 or higher includes a Single Sign-On (SSO) component.

The hypervisors are deployed in a cluster configuration and can scale up to 32 nodes per cluster. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

This topic describes the VMware vSphere Hypervisor ESXi that runs on the second generation of the Advanced Management Platform (AMP-2) and in a Vblock System utilizing VMware vSphere Enterprise Plus. This lightweight hypervisor requires very little space to run (less than six GB of storage required to install) and has minimal management overhead. VMware vSphere ESXi does not contain a console operating system.

The VMware vSphere Hypervisor ESXi boots from Cisco FlexFlash (SD card) on AMP-2. For the compute blades, ESXi boots from the SAN through an independent Fibre Channel (FC) LUN presented from the EMC VNX storage array. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing within the Vblock System. The stateless hypervisor is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can scale up to a maximum of 32 hosts and can support thousands of VMs. The clusters can also support a variety of Cisco UCS blades running inside the same cluster.

Note: Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Data stores

Vblock Systems support a mixture of data store types: block level storage using VMFS or file level storage using NFS.
41

42 Vblock 340 Gen 3.2 Architecture Overview Virtualization layer

The maximum size per VMFS5 volume is 64 TB. Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes. VCE optimizes the advanced settings for VMware vSphere ESXi hosts that are deployed in Vblock Systems to maximize the throughput and scalability of NFS data stores. Vblock Systems currently support a maximum of 256 NFS data stores per host.

Virtual networks

Virtual networking in the Advanced Management Platform (AMP-2) uses standard virtual switches. Virtual networking in Vblock Systems is managed by the Cisco Nexus 1000V Series Switch. The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities for all servers in the data center by allowing policies to move with a VM during live migration. This provides persistent network, security, and storage compliance.

Alternatively, virtual networking in a Vblock 340 is managed by a VMware vCenter Virtual Distributed Switch (version 5.5 or higher) with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Standard Switch (VSS) and a VMware vSphere Distributed Switch (VDS) and uses a minimum of four uplinks presented to the hypervisor.

The Cisco Nexus 1000V Series Switch and VMware VDS use intelligent network Class of Service (CoS) marking and Quality of Service (QoS) policies to appropriately shape network traffic according to workload type and priority.

Related information
Management hardware components (see page 45)
Management software components (see page 46)

VMware vCenter Server

This topic describes the VMware vCenter Server, which is a central management point for the hypervisors and VMs. VMware vCenter is installed on a 64-bit Windows Server and runs VMware Update Manager as a service to assist with host patch management.

The second generation of the Advanced Management Platform with redundant physical servers (AMP-2RP) and the Vblock System each have a unified VMware vCenter Server Appliance instance. Each of these instances resides in the AMP-2RP.

VMware vCenter Server provides the following functionality:
Cloning of VMs
Creating templates
42

43 Virtualization layer Vblock 340 Gen 3.2 Architecture Overview VMware vmotion and VMware Storage vmotion VMware vcenter Server provides monitoring and alerting capabilities for hosts and VMs. Vblock System administrators can create and apply the following alarms to all managed objects in VMware vcenter Server: Data center, cluster, and host health, inventory, and performance Data store health and capacity VM usage, performance, and health Virtual network usage and health Databases The backend database that supports VMware vcenter Server and VMware Update Manager (VUM) is remote Microsoft SQL Server 2008 (vsphere 5.1) and Microsoft SQL 2012 (vsphere 5.5). The SQL Server service requires a dedicated service account. Authentication Vblock Systems support the VMware Single Sign-On (SSO) Service capable of the integration of multiple identity sources including Active Directory, Open LDAP, and local accounts for authentication. VMware SSO is available in VMware vsphere 5.1 and higher. VMware vcenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate Windows services. Each service can be configured to use a dedicated service account depending on the security and directory services requirements. VCE supported features VCE supports the following VMware vcenter Server features: VMware Single Sign-On (SSO) Service (version 5.1 and higher) VMware vsphere Web Client (used with VCE Vision Intelligent Operations) VMware vsphere Distributed Switch (VDS) VMware vsphere High Availability VMware DRS VMware Fault Tolerance VMware vmotion VMware Storage vmotion Raw Device Mappings 43

44 Vblock 340 Gen 3.2 Architecture Overview Virtualization layer Resource Pools Storage DRS (capacity only) Storage driven profiles (user-defined only) Distributed power management (up to 50 percent of VMware vsphere ESXi hosts/blades) VMware Syslog Service VMware Core Dump Collector VMware vcenter Web Client 44
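As a closing illustration for this layer, the sketch below (a minimal example, not a VCE sizing tool; it assumes an even spread of hosts and uses only the 32-host-per-cluster limit quoted in the cluster configuration topic earlier in this section) shows how a given blade count can be split into vSphere clusters:

```python
# Minimal illustration: split ESXi hosts into clusters that respect the
# 32-host-per-cluster maximum described in the cluster configuration topic.
import math

def cluster_layout(total_hosts: int, max_per_cluster: int = 32) -> list:
    clusters = math.ceil(total_hosts / max_per_cluster)
    base, extra = divmod(total_hosts, clusters)
    return [base + 1 if i < extra else base for i in range(clusters)]

print(cluster_layout(128))  # a fully populated system -> [32, 32, 32, 32]
print(cluster_layout(40))   # -> [20, 20]
```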

45 Management Vblock 340 Gen 3.2 Architecture Overview Management Management components overview This topic describes the second generation of the Advanced Management Platform (AMP-2) components. The Core Management Workload is the minimum set of required management software to install, operate, and support a Vblock System. This includes all hypervisor management, element managers, virtual networking components (Cisco Nexus 1000V), and VCE Vision Intelligent Operations Software. AMP-2 provides a single management point for Vblock Systems and provides the ability to: Run the core and VCE Optional Management Workloads Monitor and manage Vblock System health, performance, and capacity Provide network and fault isolation for management Eliminate resource overhead on Vblock Systems The Core Management Workload is the minimum required set of management software to install, operate, and support a Vblock System. This includes all hypervisor management, element managers, virtual networking components (Cisco Nexus 1000v or VMware vsphere Distributed Switch (VDS)), and VCE Vision Intelligent Operations software. The VCE Optional Management Workload is non-core Management Workloads that are directly supported and installed by VCE whose primary purpose is to manage components within a Vblock System. The list would be inclusive of, but not limited to, Data Protection, Security or Storage management tools such as, EMC Unisphere for EMC RecoverPoint or EMC VPLEX, Avamar Administrator, EMC InsightIQ for Isilon, or VMware vcns appliances (vshield Edge/Manager). Related information Connectivity overview (see page 11) Unified network architecture (see page 15) Management hardware components This topic describes the second generation of the Advanced Management Platform (AMP-2) hardware. 45

46 Vblock 340 Gen 3.2 Architecture Overview Management

AMP-2 is available with one to three physical servers. All options use their own resources to run the management workload without consuming Vblock System resources:

AMP-2 option | Physical server | Description
AMP-2P | One Cisco UCS C220 server | Default configuration for Vblock System that uses a dedicated Cisco UCS C220 Server to run management workload applications.
AMP-2RP | Two Cisco UCS C220 servers | Adds a second Cisco UCS C220 Server to support application and hardware redundancy.
AMP-2HA Baseline | Two Cisco UCS C220 servers | Implements VMware vSphere HA/DRS with shared storage provided by EMC VNXe3200 storage.
AMP-2HA Performance | Three Cisco UCS C220 servers | Adds a third Cisco UCS C220 Server and additional storage for EMC FAST VP.

Management software components

This topic describes the software that is delivered pre-configured with the second generation of the Advanced Management Platform (AMP-2). AMP-2 is delivered pre-configured with the following software components, which are dependent on the selected VCE Release Certification Matrix:

Microsoft Windows Server 2008 R2 SP1 Standard x64
Microsoft Windows Server 2012 R2 Standard x64
VMware vSphere Enterprise Plus
VMware vSphere Hypervisor ESXi
VMware Single Sign-On (SSO) Service
VMware vSphere Web Client Service
VMware vSphere Inventory Service
VMware vCenter Server
VMware vCenter Database using Microsoft SQL Server Standard Edition
VMware vCenter Update Manager
VMware vSphere client
VMware vSphere Syslog Service (optional)
VMware vSphere Core Dump Service (optional)
46

47 Management Vblock 340 Gen 3.2 Architecture Overview VMware vcenter Server Appliance (AMP-2RP) - a second instance of VMware vcenter Server is required to manage the replication instance separate from the production VMware vcenter Server VMware vsphere Replication Appliance (AMP-2RP) VMware vsphere Distributed Switch (VDS) or Cisco Nexus 1000V virtual switch (VSM) EMC PowerPath/VE Electronic License Management Server (ELMS) EMC Secure Remote Support (ESRS) Array management modules, including but not limited to, EMC Unisphere Client, EMC Unisphere Service Manager, EMC VNX Initialization Utility, EMC VNX Startup Tool, EMC SMI-S Provider, EMC PowerPath Viewer Cisco Prime Data Center Network Manager and Device Manager (Optional) EMC RecoverPoint management software that includes EMC RecoverPoint Management Application and EMC RecoverPoint Deployment Manager Management network connectivity This topic provides the second generation of the Advanced Management Platform network connectivity and server assignment illustrations. 47

48 Vblock 340 Gen 3.2 Architecture Overview Management AMP-2HA network connectivity The following illustration provides an overview of the network connectivity for the AMP-2HA: 48

49 Management Vblock 340 Gen 3.2 Architecture Overview AMP-2HA server assignments The following illustration provides an overview of the VM server assignment for AMP-2HA: Vblock Systems that use VMware vsphere Distributed Switch (VDS) will not include Cisco Nexus1000V VSM VMs. The Performance option of AMP-2HA leverages the DRS functionality of VMware vcenter to optimize resource usage (CPU/memory) so that VM assignment to a VMware vsphere ESXi host will be managed automatically 49

50 Vblock 340 Gen 3.2 Architecture Overview Management AMP-2P server assignments The following illustration provides an overview of the VM server assignment for AMP-2P: AMP-2RP server assignments The following illustration provides an overview of the VM server assignment for AMP-2RP: Vblock Systems that use VMware VDS will not include Cisco Nexus1000V VSM VMs. 50
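Because the AMP-2 vCenter Server instances are the single management point for the hypervisors, they are also the natural target for scripted inventory and health checks. The following is a minimal sketch only and is not part of the Vblock build; it assumes the open-source pyVmomi SDK is installed, and the hostname and credentials are placeholders rather than values from this document:

```python
# Illustrative sketch only (not part of the Vblock build): list the ESXi hosts
# managed by a vCenter Server. Assumes the open-source pyVmomi SDK; the
# hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-style shortcut; use valid certificates in production
si = SmartConnect(host="amp2-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.runtime.connectionState)
finally:
    Disconnect(si)
```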

51 System infrastructure Vblock 340 Gen 3.2 Architecture Overview

System infrastructure

Vblock System 340 descriptions

This topic provides a comparison of the compute, network, and storage architecture for the Vblock System 340. The following table shows a comparison of the compute architecture:

Vblock 340 with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
Cisco B-Series blade chassis | 16 maximum | 16 maximum | 16 maximum | 8 maximum | 2 maximum
B-Series blades (maximum) | Half-width = 128, Full-width = 64 | Half-width = 128, Full-width = 64 | Half-width = 128, Full-width = 64 | Half-width = 64, Full-width = 32 | Half-width = 16, Full-width = 8
Fabric interconnects | Cisco UCS 6248UP or Cisco UCS 6296UP | Cisco UCS 6248UP or Cisco UCS 6296UP | Cisco UCS 6248UP or Cisco UCS 6296UP | Cisco UCS 6248UP or Cisco UCS 6296UP | Cisco UCS 6248UP

The following table shows a comparison of the network architecture:

Vblock 340 with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
Network | Cisco Nexus 5548UP or Cisco Nexus 5596UP | Cisco Nexus 5548UP or Cisco Nexus 5596UP | Cisco Nexus 5548UP or Cisco Nexus 5596UP | Cisco Nexus 5548UP or Cisco Nexus 5596UP | Cisco Nexus 5548UP
SAN | Cisco MDS 9148 (segregated), all models

The following table shows a comparison of the storage architecture:

Vblock 340 with: | EMC VNX8000 | EMC VNX7600 | EMC VNX5800 | EMC VNX5600 | EMC VNX5400
Storage access | Block or unified, all models
Back-end SAS buses | 8 or 16 | 6 | 6 | 2 or 6 | 2
Storage protocol (block) | FC, all models
Storage protocol (file) | NFS and CIFS, all models
Data store type (block) | VMFS, all models
Data store type (file) | NFS, all models
Boot path | SAN, all models
Maximum drives | 1500 | 1000 | 750 | 500 | 250
X-Blades (min/max) | 2/8 | 2/4 | 2/3 | 2/2 | 2/2
51
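For quick capacity discussions, the chassis and blade maximums from the compute comparison above can be expressed programmatically. This is an illustrative helper only; the figures are copied from the table, and the assumption that a blade chassis holds eight half-width or four full-width blades follows from the blade rows above:

```python
# Illustrative only: maximum chassis per Vblock 340 model (from the comparison
# table above) and the resulting blade ceilings (8 half-width or 4 full-width
# blades per blade chassis).
MAX_CHASSIS = {"VNX8000": 16, "VNX7600": 16, "VNX5800": 16, "VNX5600": 8, "VNX5400": 2}

for model, chassis in MAX_CHASSIS.items():
    print(f"Vblock 340 ({model}): {chassis} chassis, "
          f"{chassis * 8} half-width or {chassis * 4} full-width blades maximum")
```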

52 Vblock 340 Gen 3.2 Architecture Overview System infrastructure Cabinets overview This topic describes the cabinets for the Vblock 340. In each Vblock System, the compute, storage, and network layer components are distributed within two or more 42U cabinets. Distributing the components this way balances out the power draw and reduces the size of the power outlet units (POUs) that are required. Each cabinet conforms to a standard predefined layout. Space can be reserved for specific components even if they are not present or required for the external configuration. This design makes it easier to upgrade or expand each Vblock System as capacity needs increase. Vblock System cabinets are designed to be installed next to one another within the data center (that is, contiguously). If a customer requires the base and expansion cabinets to be physically separated, customized cabling is needed, which incurs additional cost and can increase delivery time. Note: The cable length is NOT the same as distance between cabinets. The cable must route through the cabinets and through the cable channels overhead or in the floor. Power options This topic describes the power outlet unit (POU) options inside and outside of North America. Vblock System 340 supports several POU options inside and outside of North America. North America power options The NEMA POU is standard; other POUs add time to Vblock System assembly and delivery. The following table lists the POUs available for Vblock Systems in North America: POU NEMA L15-30P IEC EC 309 3P4W SPLASH PROOF 460P9S IEC IEC309 2P3W SPLASH PROOF 360P6S NEMA L6-30P Power specifications 3-phase Delta / 30A / 208V 3-phase Delta / 60A / 208V Single phase / 60A / 208V Single phase / 30A / 208V (half-height) 52

53 System infrastructure Vblock 340 Gen 3.2 Architecture Overview Europe power options The IEC 309 POU is standard; other POUs add time to Vblock System assembly and delivery. The following table lists the POUs available for Vblock Systems in Europe: POU IEC 60309, SPLASH PROOF IEC 60309, SPLASH PROOF IEC 60309, SPLASH PROOF Power specifications 3-phase WYE/ 32A / 230 / 400V 3-phase WYE / 16A / 230 / 400V Single phase / 32A / 230V (half height) Japan power options The following table lists the POUs available for Vblock Systems in Japan: POU JIS C8303 L15-30P IEC SPLASH PROOF IEC SPLASH PROOF JIS C8303 L15-30P Power specifications 3-phase Delta / 30A / 208V 3-phase Delta / 60A / 208V Single phase / 60A / 208V Single phase / 30A / 208V (half-height) The VCE Vblock System 340 Physical Planning Guide provides more information about power requirements. Related information Accessing VCE documentation (see page 6) 53
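The POU ratings above translate into available apparent power with standard electrical arithmetic. The sketch below is illustrative only and is not VCE planning guidance; the 0.8 continuous-load derating factor is an assumption, and the VCE Vblock System 340 Physical Planning Guide remains the authoritative source for power requirements:

```python
# Illustrative power arithmetic for the POU ratings listed above (not VCE planning guidance).
# Apparent power: single phase = V * A; three phase = sqrt(3) * V * A (line-to-line voltage).
# The 0.8 derating factor models a typical continuous-load rule and is an assumption here.
import math

def pou_kva(volts: float, amps: float, three_phase: bool, derate: float = 0.8) -> float:
    kva = (math.sqrt(3) if three_phase else 1.0) * volts * amps / 1000.0
    return round(kva * derate, 1)

print("NEMA L15-30P (3-phase 30A/208V):", pou_kva(208, 30, True), "kVA usable")
print("IEC 309 60A  (3-phase 60A/208V):", pou_kva(208, 60, True), "kVA usable")
print("NEMA L6-30P  (1-phase 30A/208V):", pou_kva(208, 30, False), "kVA usable")
```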

54 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions Configuration descriptions Vblock System 340 with EMC VNX8000 This topic provides an overview of the Vblock System 340 with EMC VNX8000 components. Array options Vblock 340 (8000) is available as block only or unified storage. A unified storage Vblock 340 (8000) supports up to eight X-Blades and ships with two X-Blades and two control stations. Each X-Blade provides four 10G front-end network connections. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base data movers. The following table shows the available array options: Array Bus Supported X-Blades Block 8/16 N/A Unified 8/16 2 Unified 8/16 3 Unified 8/16 4 Unified 8/16 5 Unified 8/16 6 Unified 8/16 7 Unified 8/16 8 Each X-Blade contains: One 6 core 2.8 GHz Xeon processor 24 GB RAM One Fibre Channel (FC) storage line card (SLIC) for connectivity to array Two 2-port 10 GB SFP+ compatible SLICs Feature options The Vblock 340 (8000) supports both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 multilayer fabric switches or Cisco Nexus 5596UP switches, depending on topology. 54

55 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview The following table shows the feature options: Array Topology FC BW enhancement Ethernet BW enhancement Block Segregated Y Y Unified Segregated Y Y Block Unified network Y Y Unified Unified network Y Y Unified networking is only supported on the Vblock 340 (8000) with Cisco Nexus 5596UP switches. Ethernet BW enhancement is only supported on the Vblock 340 (8000) with Cisco Nexus 5596UP switches. Disk array enclosure configuration Vblock 340 (8000) includes two 25 slot 2.5" disk array enclosures (DAEs). An additional six DAEs are required beyond the two base DAEs. Additional DAEs can be added in either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs (after initial eight) are added in multiples of eight. If there are 16 buses, then DAEs must be added in multiples of 16. DAEs are interlaced when racked, and all 2.5" DAEs are first racked on the buses, then 3.5" DAEs. SLIC configuration The EMC VNX8000 provides slots for 11 SLICs in each service processor (SP). Two slots in each SP are populated with back-end SAS bus modules by default. Two additional back-end SAS bus modules support up to 16 buses. If this option is chosen, all DAEs are purchased in groups of 16. The Vblock 340 (8000) supports two FC SLICs per SP for host connectivity. Additional FC SLICs are included to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. The remaining SLIC slots are reserved for future VCE configuration options. VCE only supports the four port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for Vblock System host connectivity. The addition of FC BW Enhancement provides four additional FC ports per SP. As the EMC VNX 8000 has multiple CPUs, SLIC arrangements should be balanced across CPUs. 55
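The SLIC tables for these configurations follow; before that, the DAE expansion rule above can be sanity-checked with a small sketch (illustrative only, not a VCE configuration tool; the totals are raw drive slots, not usable capacity, and the multiple-of-16 rule applies when the 16-bus option is chosen):

```python
# Illustrative only: check a proposed VNX8000 DAE count against the expansion
# rule above (eight base DAEs, then growth in multiples of the bus count) and
# report raw drive slots for a mix of 25-slot 2.5" and 15-slot 3.5" DAEs.
def check_vnx8000_daes(dae_25: int, dae_15: int, buses: int = 8) -> int:
    total = dae_25 + dae_15
    if total < 8:
        raise ValueError("VNX8000 configurations start with eight DAEs")
    if (total - 8) % buses:
        raise ValueError(f"DAEs beyond the first eight must be added in multiples of {buses}")
    return dae_25 * 25 + dae_15 * 15  # raw drive slots

print(check_vnx8000_daes(dae_25=12, dae_15=4))  # 16 DAEs on 8 buses -> 360 raw slots
```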

56 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions The following table shows the SLIC configurations per SP (eight bus): Array FC BW enhancement SL 0 SL 1 SL 2 SL 3 SL 4 SL 5 SL 6 SL 7 SL 8 SL 9 SL 10 Block Y FC Res Res FC Res Bus Res Res Res FC Bus Unified Y FC Res Res FC Res Bus Res Res FC/U FC Bus Block N FC Res Res Res Res Bus Res Res Res FC Bus Unified N FC Res Res Res Res Bus Res Res FC/U FC Bus Unified -> 4 DM N FC Res FC/U Res Res Bus Res Res FC/U FC Bus Unified -> 4 DM Y FC Res FC/U FC Res Bus Res Res FC/U FC Bus Res: slot reserved for future VCE configuration options. FC: 4xFC port input/output module (IOM): Provides four 8G FC connections. FC/U: 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8G FC connections. Bus: Four port - 4x lane/port 6 Gb/s SAS: provides additional back-end bus connections. The following table shows the SLIC configurations per SP (16 bus): Array FC BW SL 0 SL 1 SL 2 SL 3 SL 4 SL 5 SL 6 SL 7 SL 8 SL 9 SL 10 Block Y FC Res Res FC Bus Bus Bus Res Res FC Bus Unified Y FC Res Res FC Bus Bus Bus Res FC/U FC Bus Block N FC Res Res Res Bus Bus Bus Res Res FC Bus Unified N FC Res Res Res Bus Bus Bus Res FC/U FC Bus Unified -> 4 DM N FC Res FC/U Res Bus Bus Bus Res FC/U FC Bus Unified -> 4 DM Y FC Res FC/U FC Bus Bus Bus Res FC/U FC Bus N/A: not available for this configuration. Res: slot reserved for future VCE configuration options. FC: 4xFC port IOM: provides four 8G FC connections. FC/U: 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8G FC connections. Bus: Four port - 4x lane/port 6 Gb/s SAS: provides additional back-end bus connections. Two additional back-end SAS bus modules are available to support up to 16 buses. If this option is chosen, all DAEs are purchased in groups of 16. Compute The Vblock 340 (8000) supports between two to 16 chassis, and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extenders IOM only), four links (Cisco UCS 2204XP fabric extenders IOM only), or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM. 56
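The table that follows lists the supported chassis counts for each fabric interconnect; as a rough illustration of what the link options mean, this sketch (illustrative arithmetic only, using the Cisco UCS 6248UP figures from that table, two IOMs per chassis, and 10G per link) reports blade capacity and per-chassis uplink bandwidth for each option:

```python
# Illustrative only: blade capacity and per-chassis uplink bandwidth for the
# Cisco UCS 6248UP link options (two IOMs per chassis, 10G per link, eight
# half-width blades per chassis).
def chassis_option(links_per_iom: int, max_chassis: int) -> str:
    blades = max_chassis * 8              # half-width blades
    uplink_gbps = 2 * links_per_iom * 10  # both IOMs, 10G links
    return (f"{links_per_iom}-link: up to {max_chassis} chassis / {blades} blades, "
            f"{uplink_gbps} Gbps of uplink bandwidth per chassis")

for links, chassis in [(2, 16), (4, 8), (8, 4)]:
    print(chassis_option(links, chassis))
```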

57 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview The following table shows the compute options that are available for the fabric interconnects: Fabric interconnect Min chassis (blades) 2-link max chassis (blades) 4-link max chassis (blades) 8-link max chassis (blades) Cisco UCS 6248UP 2 (2) 16 (128) 8 (64) 4 (32) Cisco UCS 6296UP 2 (2) N/A 16 (128) 8 (64) Connectivity The Vblock 340 (8000) supports the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches or Cisco MDS 9148 multilayer fabric switches, based on topology. The following table shows all the available switch combinations that are available for the fabric interconnects: Fabric interconnect Topology Ethernet SAN Cisco UCS 6248UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified Cisco Nexus 5596UP switches Cisco UCS 6296UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified Cisco Nexus 5596UP switches Note: The default is unified network with Cisco Nexus 5596UP switches. Vblock System 340 with EMC VNX7600 This topic provides an overview of the Vblock System 340 with EMC VNX7600 components. Array options Vblock 340 (7600) is available as block only or unified storage. A unified storage Vblock 340 (7600) supports up to eight X-Blades and ships with two X-Blades and two control stations. Each X-Blade provides four 10G front-end connections to the network. An additional data mover enclosure (DME) supports the connection of two additional X-Blades with the same configuration as the base X-Blades. 57

58 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions The following table show the available array options: Array Bus Supported X-Blades Block 6 N/A Unified 6 2 * Unified 6 3 * Unified 6 4 * Unified 6 5* Unified 6 6* Unified 6 7* Unified 6 8* *VCE supports two to eight X-Blades in a Vblock 340 (7600). Each X-Blade contains: One 4 core 2.4 GHz Xeon processor 12 GB RAM One Fibre Channel (FC) storage line card (SLIC) for connectivity to array Two 2-port 10 GB SFP+ compatible SLICs Feature options The Vblock 340 (7600) supports both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 multilayer fabric switches or the Cisco Nexus 5596UP switches, depending on topology. Both block and unified arrays use FC BW enhancement. The following table shows the feature options: Array Topology FC BW enhancement Ethernet BW enhancement Block Segregated Y Y Unified Segregated Y Y Block Unified network Y Y Unified Unified network Y Y Unified networking is only supported on the Vblock 340 (7600) with Cisco Nexus 5596UP switches. 58

59 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview Disk array enclosure configuration Vblock 340 (7600) has two 25 slot 2.5" disk array enclosures (DAEs). The EMC VNX 7600 data processor enclosure (DPE) provides the DAE for bus 0, and provides the first DAE on bus 1. An additional four DAEs are required beyond the two base DAEs. Additional DAEs can be added in either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs (after initial six) are added in multiples of six. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs. SLIC configuration The EMC VNX7600 provides slots for five SLICs in each service processor (SP). Slot 0 in each SP is populated with a back-end SAS bus module. The Vblock 340 (7600) supports two FC SLICs per SP for host connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. VCE only supports the four port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for Vblock System host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP. The following table shows the SLIC configurations per SP: Array FC BW enhancement SLIC 0 SLIC 1 SLIC 2 SLIC 3 SLIC 4 Block Y Bus FC FC FC N/A Unified (<5DM)* Y Bus FC FC FC FC/U Block N Bus FC FC N/A N/A Unified N Bus FC FC FC/U FC/U Greater than four X-Blades prohibits FC BW enhancement feature N/A: not available for this configuration. FC 4xFC port IO module: provides four 8G FC connections. FC/U 4xFC port IO module dedicated to unified X-Blade connectivity: provides four 8G FC connections. Bus four port - 4x lane/port six GB SAS: provides additional back-end bus connections. Compute The Vblock 340 (7600) supports two to 16 chassis, and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extenders input/output module (IOM) only), four links (Cisco UCS 2204XP fabric extenders IOM only), or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM. 59

60 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions The following table shows the compute options available for the fabric interconnects: Fabric interconnect Min chassis (blades) 2-link max chassis (blades) 4-link max chassis (blades) 8-link max chassis (blades) Cisco UCS 6248UP 2 (2) 16 (128) 8 (64) 4 (32) Cisco UCS 6296UP 2 (2) N/A 16 (128) 8 (64) Connectivity The Vblock 340 (7600) supports the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches or Cisco MDS 9148 multilayer fabric switches, based on the topology. The following table shows the available switch combinations available for the fabric interconnects: Fabric interconnect Topology Ethernet SAN Cisco UCS 6248UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified Cisco Nexus 5596UP switches Cisco UCS 6296UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified Cisco Nexus 5596UP switches Note: The default is unified network with Cisco Nexus 5596UP switches. Vblock System 340 with EMC VNX5800 This topic provides an overview of the Vblock System 340 with EMC VNX5800 components. Array options Vblock 340 (5800) is available as block only or unified storage. A unified storage Vblock 340 (5800) supports up to six X-Blades and ships with two X-Blades and two control stations. Each X-Blade provides four 10G front-end connections to the network. An additional data mover enclosure (DME) supports the connection of one additional X-Blade with the same configuration as the base data movers. 60

61 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview The following table shows the available array options: Array Bus Supported X-Blades Block 6 N/A Unified 6 2 Unified 6 3* Unified 6 4* Unified 6 5* Unified 6 6* VCE supports two to six X-Blades in a Vblock 340 (5800). Each X-Blade contains: One 4 core 2.13 GHz Xeon processor 12 GB RAM One Fibre Channel (FC) storage line card (SLIC) for connectivity to array Two 2-port 10 GB SFP+ compatible SLICs Feature options The Vblock 340 (5800) supports both Ethernet and FC bandwidth (BW) enhancement. Ethernet BW enhancement is available with Cisco Nexus 5596UP switches only. FC BW enhancement requires that SAN connectivity is provided by Cisco MDS 9148 multilayer fabric switches or the Cisco Nexus 5596UP switches, depending on topology. Both block and unified arrays use FC BW enhancement. The following table shows the feature options: Array Topology FC BW enhancement Ethernet BW enhancement Block Segregated Y Y Unified Segregated Y Y Block Unified network Y Y Unified Unified network Y Y Note: Unified networking is only supported on the Vblock 340 (5800) with Cisco Nexus 5596UP switches. 61

62 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions Disk array enclosure configuration Vblock 340 (5800) has two 25 slot 2.5" disk array enclosure (DAEs). The EMC VNX5800 data processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. An additional four DAEs are required beyond the base two DAEs. Additional DAEs can be added in either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs (after initial six) are added in multiples of six. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs. SLIC configuration The EMC VNX5800 provides slots for five SLICs in each service processor. Slot 0 is populated with a back-end SAS bus module. The Vblock 340 (5800) supports two FC SLICs per SP for host connectivity. A third is reserved to support unified storage. If FC BW enhancement is configured, an additional FC SLIC is added to the array. VCE only supports the four-port FC SLIC for host connectivity. By default, six FC ports per SP are connected to the SAN switches for Vblock System host connectivity. The addition of FC BW enhancement provides four additional FC ports per SP. The following table shows the SLIC configurations per SP: Array FC BW enhancement SLIC 0 SLIC 1 SLIC 2 SLIC 3 SLIC 4 Block Y Bus FC FC FC N/A Unified (<5DM)* Y Bus FC FC FC FC/U Block N Bus FC FC N/A N/A Unified N Bus FC FC FC/U FC/U Greater than four X-Blades prohibits FC BW enhancement. N/A: not available for this configuration. FC 4xFC port input/output module (IOM): provides four 8G FC connections. FC/U 4xFC port IOM dedicated to unified X-Blade connectivity: provides four 8G FC connections. Bus: Four port - 4x lane/port 6 Gb/s SAS: provides additional back-end bus connections. Compute The Vblock 340 (5800) supports two to 16 chassis, and up to 128 half-width blades. Each chassis can be connected with two links (Cisco UCS 2204XP fabric extenders IOM only), four links (Cisco UCS 2204XP fabric extenders IOM only) or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM. The following table shows the compute options that are available for the fabric interconnects: Fabric interconnect Min chassis (blades) 2-link max chassis (blades) 4-link max chassis (blades) 8-link max chassis (blades) Cisco UCS 6248UP 2 (2) 16 (128) 8 (64) 4 (32) 62

63 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview Fabric interconnect Min chassis (blades) 2-link max chassis (blades) 4-link max chassis (blades) 8-link max chassis (blades) Cisco UCS 6296UP 2 (2) N/A 16 (128) 8 (64) Connectivity The Vblock 340 (5800) supports the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 switches or Cisco MDS 9148 multilayer fabric switches, based on topology. The following table shows all the available switch combinations that are available for the fabric interconnects: Fabric interconnect Topology Ethernet SAN Cisco UCS 6248UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified Cisco Nexus 5596UP switches Cisco UCS 6296UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified Cisco Nexus 5596UP switches Note: The default is unified network with Cisco Nexus 5596UP switches. Vblock System 340 with EMC VNX5600 This topic provides an overview of the Vblock System 340 with EMC VNX5600 components. Array options Vblock 340 (5600) is available as block only or unified storage. A unified storage Vblock 340 (5600) supports one to four X-Blades and two control stations. Each X-Blade provides two 10G front-end connections to the network. The following table shows the available array options: Array Bus Supported X-Blades Block 2 or 6 N/A 63

64 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions

Array | Bus | Supported X-Blades
Unified | 2 or 6 | 1
Unified | 2 or 6 | 2*
Unified | 2 or 6 | 3*
Unified | 2 or 6 | 4*

*VCE supports one to four X-Blades in a Vblock 340 (5600).

Each X-Blade contains:
One 4 core 2.13 GHz Xeon processor
Six GB RAM
One Fibre Channel (FC) storage line card (SLIC) for connectivity to the array
One 2-port 10 GB SFP+ compatible SLIC

Feature options

The Vblock 340 (5600) uses the Cisco Nexus 5596UP switches. Vblock 340 (5600) does not support FC bandwidth (BW) enhancement in block or unified arrays. The following table shows the feature options:

Array | Topology | Ethernet BW enhancement
Block | Segregated | Y
Unified | Segregated | Y
Block | Unified network | Y
Unified | Unified network | Y

DAE configuration

Vblock 340 (5600) has two 25 slot 2.5" disk array enclosures (DAEs). The EMC VNX5600 disk processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. Additional DAEs can be either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs are added in multiples of two. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

An additional four port SAS bus expansion SLIC is an option with the Vblock 340 (5600). If more than 19 DAEs are required, the addition of a four port expansion bus card is required. If the card is added, DAEs are purchased in groups of six.
64

65 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview SLIC configuration The EMC VNX5600 provides slots for five SLICs in each service processor. The Vblock 340 (5600) has two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration options. VCE only supports the four port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for Vblock System host connectivity. The following table shows the SLIC configurations per SP: Array FC bandwidth enhancement SLIC 0 SLIC 1 SLIC 2 SLIC 3 SLIC 4 Block N Bus FC FC N/A N/A Unified N Bus FC FC N/A FC/U The FC 4xFC port I/O module provides four 8G FC connections. The FC/U 4xFC port IO module (IOM) dedicated to unified X-Blade connectivity provides four 8G FC connections. Bus four port - 4x lane/port six GB SAS: provides additional back-end bus connections. Compute The Vblock 340 (5600) supports two to eight chassis and up to 64 half-width blades. Each chassis can be connected with four links (Cisco UCS 2204XP fabric extenders IOM only) or eight links (Cisco UCS 2208XP fabric extenders IOM only) per IOM. The following table shows the compute options that are available for the fabric interconnects: Fabric interconnect Min chassis (blades) 2-link max chassis (blades) 4-link max chassis (blades) 8-link max chassis (blades) Cisco UCS 6248UP 2 (2) N/A 8 (64) 4 (32) Cisco UCS 6296UP 2 (2) N/A 16 (128) 8 (64) Connectivity The Vblock 340 (5600) supports the Cisco UCS 6248UP fabric interconnects and Cisco UCS 6296UP fabric interconnects. These uplink to the Cisco Nexus 5548UP switches or Cisco Nexus 5596UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5500 Series Switches or Cisco MDS 9148 multilayer fabric switches, based on topology. The following table shows the switch options that are available for the fabric interconnects: Fabric Interconnect Topology Ethernet SAN Cisco UCS 6248UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified network Cisco Nexus 5548UP switches 65

66 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions Fabric Interconnect Topology Ethernet SAN Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified network Cisco Nexus 5596UP switches Cisco UCS 6296UP Segregated Cisco Nexus 5548UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified network Cisco Nexus 5548UP switches Segregated Cisco Nexus 5596UP switches Cisco MDS 9148 Multilayer Fabric Switch Unified network Cisco Nexus 5596UP switches Note: The default is unified network with Cisco Nexus 5596UP switches. Vblock System 340 with EMC VNX5400 This topic provides an overview of the Vblock System 340 with EMC VNX5400 components. Array options Vblock 340 (5400) is available as block only or unified storage. A unified storage Vblock 340 (5400) supports one to four X-Blades and two control stations. Each X-Blade provides two 10G front-end connections to the network. The following table shows the available array options: Array Bus Supported X-Blades Block 2 N/A Unified 2 1* Unified 2 2* Unified 2 3* Unified 2 4* *VCE supports one to four X-Blades in a Vblock 340 (5400). Each X-Blade contains: One 4 core 2.13 GHz Xeon processor Six GB RAM One Fibre Channel (FC) storage line card (SLIC) for connectivity to array 66

67 Configuration descriptions Vblock 340 Gen 3.2 Architecture Overview

One 2-port 10 GB SFP+ compatible SLIC

Feature options

The Vblock 340 (5400) uses the Cisco UCS 6248UP fabric interconnects. Vblock 340 (5400) does not support FC bandwidth (BW) enhancement or Ethernet BW enhancement in block or unified arrays.

Disk array enclosure configuration

Vblock 340 (5400) has two 25 slot 2.5" disk array enclosures (DAEs). The EMC VNX5400 disk processor enclosure (DPE) provides the DAE for bus 0, and the second provides the first DAE on bus 1. Additional DAEs can be either 15 slot 3.5" DAEs or 25 slot 2.5" DAEs. Additional DAEs are added in multiples of two. DAEs are interlaced when racked, and all 2.5" DAEs are racked first on the buses, then 3.5" DAEs.

SLIC configuration

The EMC VNX5400 provides slots for five SLICs in each service processor (SP), although only four are enabled. The Vblock 340 (5400) has two FC SLICs per SP for host connectivity. A third FC SLIC can be ordered to support unified storage. The remaining SLIC slots are reserved for future VCE configuration options. VCE only supports the four-port FC SLIC for host connectivity. Six FC ports per SP are connected to the SAN switches for Vblock System host connectivity. The following table shows the SLIC configurations per SP:

Array | FC BW enhancement | SLIC 0 | SLIC 1 | SLIC 2 | SLIC 3 | SLIC 4
Block | N | N/A | FC | FC | N/A | N/A
Unified | N | N/A | FC | FC | N/A | FC/U

The FC 4xFC port I/O module (IOM) provides four 8G FC connections. The FC/U 4xFC port IOM dedicated to unified X-Blade connectivity provides four 8G FC connections.

Compute

The Vblock 340 (5400) is configured with two chassis that support up to 16 half-width blades. Each chassis is connected with four links per fabric extender I/O module (IOM). The Vblock 340 (5400) supports the Cisco UCS 2204XP Fabric Extender IOM only. The following table shows the compute options that are available for the Cisco UCS 6248UP fabric interconnects:

Fabric interconnect | Min chassis (blades) | 2-link max chassis (blades) | 4-link max chassis (blades) | 8-link max chassis (blades)
Cisco UCS 6248UP | 2 (2) | N/A | 2 (16) | N/A
67

68 Vblock 340 Gen 3.2 Architecture Overview Configuration descriptions

Connectivity

The Vblock 340 (5400) contains the Cisco UCS 6248UP fabric interconnects that uplink to Cisco Nexus 5548UP switches for Ethernet connectivity. SAN connectivity is provided by the Cisco Nexus 5548UP switches or Cisco MDS 9148 multilayer fabric switches. The following table shows the switch options that are available for the fabric interconnects:

Fabric interconnect | Topology | Ethernet | SAN
Cisco UCS 6248UP | Segregated | Cisco Nexus 5548UP switches | Cisco MDS 9148 Multilayer Fabric Switch
Cisco UCS 6248UP | Unified network | Cisco Nexus 5548UP switches | -
Cisco UCS 6248UP | Segregated | Cisco Nexus 5596UP switches | Cisco MDS 9148 Multilayer Fabric Switch
Cisco UCS 6248UP | Unified network | Cisco Nexus 5596UP switches | -

Note: The default is Cisco Nexus 5596UP switches.
68

69 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Sample configurations Sample Vblock System 340 with EMC VNX8000 Vblock 340 with EMC VNX8000 cabinet elevations vary based on the specific configuration requirements. These are provided for sample purposes only. For specifications for a specific Vblock 340 design, consult your varchitect. Vblock 340 with EMC VNX8000 front view 69

70 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX8000 rear view 70

71 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX8000 cabinet 1 71

72 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX8000 cabinet 2 72

73 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX8000 cabinet 3 73

74 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX8000 cabinet 4 74

75 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX8000 cabinet 5 Sample Vblock System 340 with EMC VNX5800 Vblock 340 with EMC VNX5800 cabinet elevations vary based on the specific configuration requirements. These are provided for sample purposes only. For specifications for a specific Vblock 340 design, consult your varchitect. 75

76 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX5800 front view 76

77 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX5800 rear view 77

78 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX5800 cabinet 1 78

79 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX5800 cabinet 2 79

80 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX5800 cabinet 3 Sample Vblock System 340 with EMC VNX5800 (ACI ready) Vblock 340 with EMC VNX5800 elevations for a cabinet that is Cisco Application Centric Infrastructure (ACI) ready vary based on the specific configuration requirements. These are provided for sample purposes only. For specifications for a specific Vblock 340 design, consult your varchitect. 80

81 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX5800 (ACI ready) front view 81

82 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX5800 (ACI ready) rear view 82

83 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX5800 (ACI ready) cabinet 1 83

84 Vblock 340 Gen 3.2 Architecture Overview Sample configurations Vblock 340 with EMC VNX5800 (ACI ready) cabinet 2 84

85 Sample configurations Vblock 340 Gen 3.2 Architecture Overview Vblock 340 with EMC VNX5800 (ACI ready) cabinet 3 85

86 Vblock 340 Gen 3.2 Architecture Overview Additional references Additional references Virtualization components This topic provides a description of the virtualization components. Product Description Link to documentation VMware vcenter Server VMware vsphere ESXi Provides a scalable and extensible platform that forms the foundation for virtualization management. Virtualized infrastructure for Vblock Systems. Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS). vcenter-server/ vsphere/ Compute components This topic provides a description of the compute components. Product Description Link to documentation Cisco UCS 6200 Series Fabric Interconnects Cisco UCS 5100 Series Blade Server Chassis Cisco UCS 2200 Series Fabric Extenders Cisco UCS B-Series Blade Servers Cisco UCS Manager UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. Chassis that supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure. Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect Servers that adapt to application demands, intelligently scale energy use, and offer bestin-class virtualization Provides centralized management capabilities for the Cisco Unified Computing System index.html index.html ps10265/ps10276/ data_sheet_c html index.html index.html 86

87 Additional references Vblock 340 Gen 3.2 Architecture Overview Network components This topic provides a description of the network components. Product Description Link to documentation Cisco Nexus 1000V Series Switches VMware vsphere Distributed Switch (VDS) Cisco MDS 9148 Multilayer Fabric Switch Cisco Nexus 3048 Switch Cisco Nexus 5000 Series Switches Cisco Nexus 7000 Series Switches Cisco Nexus 9396PX Switch A software switch on a server that delivers Cisco VN-Link services to virtual machines hosted on that server. A VMware vcenter-managed software switch that delivers advanced network services to virtual machines hosted on that server. Provides 48 line-rate 16-Gbps ports and offers cost-effective scalability through on-demand activation of ports. Provides local switching that connects transparently to upstream Cisco Nexus switches, creating an end-to-end Cisco Nexus fabric in data centers. Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. A single end-to-end platform designed around infrastructure scalability, operational continuity, and transport flexibility. Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. Designed to support Cisco Application Centric Infrastructure (ACI). ps9902/index.html vsphere/features/distributedswitch.html ps10703/index.html products/switches/nexus-3048-switch/ index.html products/switches/nexus-5000-seriesswitches/index.html products/switches/nexus-7000-seriesswitches/index.html switches/nexus-9396px-switch/ model.html Storage components This topic provides a description of the storage components. Product Description Link to documentation EMC VNX8000, EMC VNX7600, EMC VNX5800, EMC VNX5600, EMC VNX5400 storage arrays High-performing unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. vnx-series.htm 87

88 About VCE VCE accelerates the adoption of converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while improving time to market for enterprises and service providers globally. Through its leading Vblock Systems, VCE delivers the industry's only true converged infrastructure, leveraging Cisco compute and network technology, EMC storage and data protection, and VMware virtualization and virtualization management. VCE solutions are available through an extensive partner network and cover horizontal applications, vertical industry offerings and application development environments, enabling customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. For more information, go to All rights reserved. VCE, Vblock, VCE Vision, and the VCE logo are registered trademarks or trademarks of VCE Company, LLC. and/or its affiliates in the United States or other countries. All other trademarks used herein are the property of their respective owners. 88


More information

CONVERGED INFRASTRUCTURE SOLUTION FOR MICROSOFT SHAREPOINT, LYNC, AND EXCHANGE ON VCE VBLOCK SYSTEM 340

CONVERGED INFRASTRUCTURE SOLUTION FOR MICROSOFT SHAREPOINT, LYNC, AND EXCHANGE ON VCE VBLOCK SYSTEM 340 CONVERGED INFRASTRUCTURE SOLUTION FOR MICROSOFT SHAREPOINT, LYNC, AND EXCHANGE ON VCE VBLOCK SYSTEM 340 EMC Solutions Abstract This white paper provides a detailed reference architecture of Microsoft messaging

More information

Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200

Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200 Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200 Most organizations routinely utilize a server virtualization infrastructure to benefit

More information

VCE Technology Extensions for EMC Storage Product Guide

VCE Technology Extensions for EMC Storage Product Guide www.vce.com VCE Technology Extensions for EMC Storage Product Guide Document revision 1.1 March 2016 Technology Extensions for EMC Storage Product Guide Revision history Revision history Date Document

More information

Frequently Asked Questions: EMC UnityVSA

Frequently Asked Questions: EMC UnityVSA Frequently Asked Questions: EMC UnityVSA 302-002-570 REV 01 Version 4.0 Overview... 3 What is UnityVSA?... 3 What are the specifications for UnityVSA?... 3 How do UnityVSA specifications compare to the

More information

EMC Backup and Recovery for Microsoft Exchange 2007 SP2

EMC Backup and Recovery for Microsoft Exchange 2007 SP2 EMC Backup and Recovery for Microsoft Exchange 2007 SP2 Enabled by EMC Celerra and Microsoft Windows 2008 Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the

More information

E4 UNIFIED STORAGE powered by Syneto

E4 UNIFIED STORAGE powered by Syneto E4 UNIFIED STORAGE powered by Syneto THE E4 UNIFIED STORAGE (US) SERIES POWERED BY SYNETO From working in the heart of IT environment and with our major customers coming from Research, Education and PA,

More information

Springpath Data Platform with Cisco UCS Servers

Springpath Data Platform with Cisco UCS Servers Springpath Data Platform with Cisco UCS Servers Reference Architecture March 2015 SPRINGPATH DATA PLATFORM WITH CISCO UCS SERVERS Reference Architecture 1.0 Introduction to Springpath Data Platform 1 2.0

More information

DATA CENTRE TECHNOLOGIES & SERVICES

DATA CENTRE TECHNOLOGIES & SERVICES DATA CENTRE TECHNOLOGIES & SERVICES RE-Solution Data Ltd Reach Recruit Resolve Refine 170 Greenford Road Harrow Middlesex HA1 3QX T +44 (0) 8450 031323 EXECUTIVE SUMMARY The purpose of a data centre is

More information

Cisco Unified Communications on the Cisco Unified Computing System

Cisco Unified Communications on the Cisco Unified Computing System Cisco Unified Communications on the Cisco Unified Computing System Cisco is introducing software application versions from the Cisco Unified Communications portfolio (Versions 8.0(2) and later) that are

More information

- Brazoria County on coast the north west edge gulf, population of 330,242

- Brazoria County on coast the north west edge gulf, population of 330,242 TAGITM Presentation April 30 th 2:00 3:00 slot 50 minutes lecture 10 minutes Q&A responses History/Network core upgrade Session Outline of how Brazoria County implemented a virtualized platform with a

More information

VCE Vision Intelligent Operations Version 2.5 Technical Overview

VCE Vision Intelligent Operations Version 2.5 Technical Overview Revision history www.vce.com VCE Vision Intelligent Operations Version 2.5 Technical Document revision 2.0 March 2014 2014 VCE Company, 1 LLC. Revision history VCE Vision Intelligent Operations Version

More information

Reference Architecture

Reference Architecture Reference Architecture EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX, VMWARE vsphere 4.1, VMWARE VIEW 4.5, VMWARE VIEW COMPOSER 2.5, AND CISCO UNIFIED COMPUTING SYSTEM Reference Architecture

More information

HP AppSystem for SAP HANA

HP AppSystem for SAP HANA Technical white paper HP AppSystem for SAP HANA Distributed architecture with 3PAR StoreServ 7400 storage Table of contents Executive summary... 2 Introduction... 2 Appliance components... 3 3PAR StoreServ

More information

Implementing Cisco Data Center Unified Computing (DCUCI)

Implementing Cisco Data Center Unified Computing (DCUCI) Certification CCNP Data Center Implementing Cisco Data Center Unified Computing (DCUCI) 5 days Implementing Cisco Data Center Unified Computing (DCUCI) is designed to serve the needs of engineers who implement

More information

Intel Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms

Intel Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms Intel Cloud Builders Guide Intel Xeon Processor-based Servers VCE* Vblock* Infrastructure-as-a-Service Intel Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms VCE* Vblock* Infrastructure-as-a-Service

More information

Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4

Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Quest LiteSpeed Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication

More information

INTRODUCTION TO THE EMC VNX SERIES

INTRODUCTION TO THE EMC VNX SERIES White Paper INTRODUCTION TO THE EMC VNX SERIES VNX5100, VNX5300, VNX5500, VNX5700, & VNX7500 A Detailed Review Abstract This white paper introduces the EMC VNX series unified platform. It discusses the

More information

Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage, Cisco Unified Computing System, and Microsoft Hyper-V

Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage, Cisco Unified Computing System, and Microsoft Hyper-V Chapte 1: Introduction Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage, Cisco Unified Computing System, and Microsoft Hyper-V A Detailed Review EMC Information Infrastructure

More information

EMC BUSINESS RECOVERY SOLUTION FOR MEDITECH

EMC BUSINESS RECOVERY SOLUTION FOR MEDITECH White Paper EMC BUSINESS RECOVERY SOLUTION FOR MEDITECH VMware vsphere, EMC VMAXe, Data Domain, NetWorker NMMEDI, FAST VP, Faster backup and recovery of MEDITECH Client Server 6.0 in minutes or hours,

More information

Cisco SmartPlay Select. Cisco Global Data Center Promotional Program

Cisco SmartPlay Select. Cisco Global Data Center Promotional Program Cisco SmartPlay Select Cisco Global Data Center Promotional Program SmartPlay Select Program Program Goals and Benefits UCS Promotional offers to accelerate new UCS customers acquisition by showcase Cisco

More information

EMC VNX-F ALL FLASH ARRAY

EMC VNX-F ALL FLASH ARRAY EMC VNX-F ALL FLASH ARRAY Purpose-built for price, density & speed ESSENTIALS Incredible scale & density with up to 172 TB usable flash capacity in 6U @ 28.63 TB/U Consistent high performance up to 400K

More information

EMC BACKUP-AS-A-SERVICE

EMC BACKUP-AS-A-SERVICE Reference Architecture EMC BACKUP-AS-A-SERVICE EMC AVAMAR, EMC DATA PROTECTION ADVISOR, AND EMC HOMEBASE Deliver backup services for cloud and traditional hosted environments Reduce storage space and increase

More information

EMC Integrated Infrastructure for VMware

EMC Integrated Infrastructure for VMware EMC Integrated Infrastructure for VMware Enabled by EMC Celerra NS-120 Reference Architecture EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000

More information

Hitachi Unified Compute Platform (UCP) Pro for VMware vsphere

Hitachi Unified Compute Platform (UCP) Pro for VMware vsphere Test Validation Hitachi Unified Compute Platform (UCP) Pro for VMware vsphere Author:, Sr. Partner, Evaluator Group April 2013 Enabling you to make the best technology decisions 2013 Evaluator Group, Inc.

More information

Block based, file-based, combination. Component based, solution based

Block based, file-based, combination. Component based, solution based The Wide Spread Role of 10-Gigabit Ethernet in Storage This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates

More information

UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment

UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment DATASHEET TM NST6000 UNIFIED HYBRID STORAGE Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment UNIFIED The Nexsan NST6000 unified hybrid storage appliance is ideal for

More information

Cisco Unified Computing System with NetApp Storage for SAP HANA

Cisco Unified Computing System with NetApp Storage for SAP HANA Data Sheet Cisco Unified Computing System with NetApp Storage for SAP HANA Introduction SAP HANA The SAP High-Performance Analytic Appliance (HANA) is a new non-intrusive hardware and software solution

More information

nexsan NAS just got faster, easier and more affordable.

nexsan NAS just got faster, easier and more affordable. nexsan E5000 STORAGE SYSTEMS NAS just got faster, easier and more affordable. Overview The Nexsan E5000 TM, a part of Nexsan s Flexible Storage Platform TM, is Nexsan s family of NAS storage systems that

More information

Achieve Automated, End-to-End Firmware Management with Cisco UCS Manager

Achieve Automated, End-to-End Firmware Management with Cisco UCS Manager Achieve Automated, End-to-End Firmware Management with Cisco UCS Manager What You Will Learn This document describes the operational benefits and advantages of firmware provisioning with Cisco UCS Manager

More information

Addendum No. 1 to Packet No. 28-13 Enterprise Data Storage Solution and Strategy for the Ingham County MIS Department

Addendum No. 1 to Packet No. 28-13 Enterprise Data Storage Solution and Strategy for the Ingham County MIS Department Addendum No. 1 to Packet No. 28-13 Enterprise Data Storage Solution and Strategy for the Ingham County MIS Department The following clarifications, modifications and/or revisions to the above project shall

More information

VxRACK : L HYPER-CONVERGENCE AVEC L EXPERIENCE VCE JEUDI 19 NOVEMBRE 2015. Jean-Baptiste ROBERJOT - VCE - Software Defined Specialist

VxRACK : L HYPER-CONVERGENCE AVEC L EXPERIENCE VCE JEUDI 19 NOVEMBRE 2015. Jean-Baptiste ROBERJOT - VCE - Software Defined Specialist VxRACK : L HYPER-CONVERGENCE AVEC L EXPERIENCE VCE JEUDI 19 NOVEMBRE 2015 Jean-Baptiste ROBERJOT - VCE - Software Defined Specialist Who is VCE Today? #1 Market Share & Gartner MQ position 96% Customer

More information

NET ACCESS VOICE PRIVATE CLOUD

NET ACCESS VOICE PRIVATE CLOUD Page 0 2015 SOLUTION BRIEF NET ACCESS VOICE PRIVATE CLOUD A Cloud and Connectivity Solution for Hosted Voice Applications NET ACCESS LLC 9 Wing Drive Cedar Knolls, NJ 07927 www.nac.net Page 1 Table of

More information

EMC VSPEX END-USER COMPUTING SOLUTION

EMC VSPEX END-USER COMPUTING SOLUTION Reference Architecture EMC VSPEX END-USER COMPUTING SOLUTION Citrix XenDesktop 5.6 with VMware vsphere 5 for 500 Virtual Desktops Enabled by Citrix XenDesktop 5.6, VMware vsphere 5, EMC VNX5300, and EMC

More information

Cisco Unified Computing System: Meet the Challenges of Microsoft SharePoint Server Workloads

Cisco Unified Computing System: Meet the Challenges of Microsoft SharePoint Server Workloads White Paper Cisco Unified Computing System: Meet the Challenges of Microsoft SharePoint Server Workloads What You Will Learn Designing an enterprise-class Microsoft SharePoint Server 2013 environment presents

More information

Cisco UCS B-Series M2 Blade Servers

Cisco UCS B-Series M2 Blade Servers Cisco UCS B-Series M2 Blade Servers Cisco Unified Computing System Overview The Cisco Unified Computing System is a next-generation data center platform that unites compute, network, storage access, and

More information

Smart Storage and Modern Data Protection Built for Virtualization

Smart Storage and Modern Data Protection Built for Virtualization Smart Storage and Modern Data Protection Built for Virtualization Dot Hill Storage Arrays and Veeam Backup & Replication Software offer the winning combination. Veeam and Dot Hill Solutions Introduction

More information

DVS Enterprise. Reference Architecture. VMware Horizon View Reference

DVS Enterprise. Reference Architecture. VMware Horizon View Reference DVS Enterprise Reference Architecture VMware Horizon View Reference THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED

More information

Evaluation of Enterprise Data Protection using SEP Software

Evaluation of Enterprise Data Protection using SEP Software Test Validation Test Validation - SEP sesam Enterprise Backup Software Evaluation of Enterprise Data Protection using SEP Software Author:... Enabling you to make the best technology decisions Backup &

More information

EMC VMAX3 FAMILY - VMAX 100K, 200K, 400K

EMC VMAX3 FAMILY - VMAX 100K, 200K, 400K SPECIFICATION SHEET EMC VMAX3 FAMILY - VMAX 100K, 200K, 400K The EMC VMAX3 TM family delivers the latest in Tier-1 scale-out multi-controller architecture with unmatched consolidation and efficiency for

More information

Cisco Unified Computing System: Meet the Challenges of Microsoft SharePoint Server Workloads

Cisco Unified Computing System: Meet the Challenges of Microsoft SharePoint Server Workloads White Paper Cisco Unified Computing System: Meet the Challenges of Microsoft SharePoint Server Workloads What You Will Learn Occam s razor (according to Wikipedia) is a principle that generally recommends,

More information

Open-E Data Storage Software and Intel Modular Server a certified virtualization solution

Open-E Data Storage Software and Intel Modular Server a certified virtualization solution Open-E Data Storage Software and Intel Modular Server a certified virtualization solution Contents 1. New challenges for SME IT environments 2. Open-E DSS V6 and Intel Modular Server: the ideal virtualization

More information

A Platform Built for Server Virtualization: Cisco Unified Computing System

A Platform Built for Server Virtualization: Cisco Unified Computing System A Platform Built for Server Virtualization: Cisco Unified Computing System What You Will Learn This document discusses how the core features of the Cisco Unified Computing System contribute to the ease

More information

EMC VNXe3150, VNXe3300 UNIFIED STORAGE SYSTEMS

EMC VNXe3150, VNXe3300 UNIFIED STORAGE SYSTEMS EMC, UNIFIED STORAGE SYSTEMS EMC VNXe series unified storage systems deliver exceptional flexibility for the smallto-medium-business, combining a unique, application-driven management environment with

More information

The Technical Infrastructure of Data Centres

The Technical Infrastructure of Data Centres The Technical Infrastructure of Data Centres Best Practice Document Produced by the CESNET-led Working Group on Network Monitoring (CBPD 121) Author: Martin Pustka November 2012 TERENA 2012. All rights

More information

www.vce.com VCE Vision Intelligent Operations Version 2.6 Technical Overview

www.vce.com VCE Vision Intelligent Operations Version 2.6 Technical Overview www.vce.com VCE Vision Intelligent Operations Version 2.6 Technical Overview Document revision 2.0 April 2015 VCE Vision Intelligent Operations Version 2.6 Technical Overview Revision history Revision

More information

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products MaxDeploy Ready Hyper- Converged Virtualization Solution With SanDisk Fusion iomemory products MaxDeploy Ready products are configured and tested for support with Maxta software- defined storage and with

More information

IOmark-VM. DotHill AssuredSAN Pro 5000. Test Report: VM- 130816-a Test Report Date: 16, August 2013. www.iomark.org

IOmark-VM. DotHill AssuredSAN Pro 5000. Test Report: VM- 130816-a Test Report Date: 16, August 2013. www.iomark.org IOmark-VM DotHill AssuredSAN Pro 5000 Test Report: VM- 130816-a Test Report Date: 16, August 2013 Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark-VM, IOmark-VDI, VDI-IOmark, and IOmark

More information

Maxta Storage Platform Enterprise Storage Re-defined

Maxta Storage Platform Enterprise Storage Re-defined Maxta Storage Platform Enterprise Storage Re-defined WHITE PAPER Software-Defined Data Center The Software-Defined Data Center (SDDC) is a unified data center platform that delivers converged computing,

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by EMC NetWorker Module for Microsoft SQL Server Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the

More information

VCE Converged Infrastructure Platforms

VCE Converged Infrastructure Platforms VCE Converged Infrastructure Platforms Sasho Tasevski Advisory SE, EMC² ABOUT EMC EMC Corporation is a global leader in enabling businesses and service providers to transform their operations and deliver

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

Cisco UCS B460 M4 Blade Server

Cisco UCS B460 M4 Blade Server Data Sheet Cisco UCS B460 M4 Blade Server Product Overview The new Cisco UCS B460 M4 Blade Server uses the power of the latest Intel Xeon processor E7 v2 product family to add new levels of performance

More information

Introduction to the EMC VNXe3200

Introduction to the EMC VNXe3200 White Paper Abstract This white paper introduces the architecture and functionality of the EMC VNXe3200. This paper also discusses some of the advanced features of the VNXe3200 storage system. July 2015

More information

IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org

IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator

More information

PROPRIETARY CISCO. Cisco Cloud Essentials for EngineersV1.0. LESSON 1 Cloud Architectures. TOPIC 1 Cisco Data Center Virtualization and Consolidation

PROPRIETARY CISCO. Cisco Cloud Essentials for EngineersV1.0. LESSON 1 Cloud Architectures. TOPIC 1 Cisco Data Center Virtualization and Consolidation Cisco Cloud Essentials for EngineersV1.0 LESSON 1 Cloud Architectures TOPIC 1 Cisco Data Center Virtualization and Consolidation 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential

More information

Networking Solutions for Storage

Networking Solutions for Storage Networking Solutions for Storage Table of Contents A SAN for Mid-Sized Businesses... A Total Storage Solution... The NETGEAR ReadyDATA RD 0... Reference Designs... Distribution Layer... Access LayeR...

More information

The safer, easier way to help you pass any IT exams. Exam : 000-115. Storage Sales V2. Title : Version : Demo 1 / 5

The safer, easier way to help you pass any IT exams. Exam : 000-115. Storage Sales V2. Title : Version : Demo 1 / 5 Exam : 000-115 Title : Storage Sales V2 Version : Demo 1 / 5 1.The IBM TS7680 ProtecTIER Deduplication Gateway for System z solution is designed to provide all of the following EXCEPT: A. ESCON attach

More information

FlexPod for VMware The Journey to Virtualization and the Cloud

FlexPod for VMware The Journey to Virtualization and the Cloud FlexPod for VMware The Journey to Virtualization and the Cloud Presented Jointly by Simac Technik ČR with Cisco, NetApp, and VMware 2010 NetApp, Cisco, and VMware. All Rights Reserved. C97-633489-00 One

More information

Dynamically unify your data center Dell Compellent: Self-optimized, intelligently tiered storage

Dynamically unify your data center Dell Compellent: Self-optimized, intelligently tiered storage Dell Fluid Data architecture Dynamically unify your data center Dell Compellent: Self-optimized, intelligently tiered storage Dell believes that storage should help you spend less while giving you the

More information

Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation

Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation Cisco Nexus Family Provides a Granular, Cost-Effective Path for Data Center Evolution What You Will Learn As businesses move

More information

VBLOCK SOLUTION FOR KNOWLEDGE WORKER ENVIRONMENTS WITH VMWARE VIEW 4.5

VBLOCK SOLUTION FOR KNOWLEDGE WORKER ENVIRONMENTS WITH VMWARE VIEW 4.5 Table of Contents www.vce.com VBLOCK SOLUTION FOR KNOWLEDGE WORKER ENVIRONMENTS WITH VMWARE VIEW 4.5 Version 2.0 February 2013 1 Copyright 2013 VCE Company, LLC. All Rights Reserved.

More information

UCS Network Utilization Monitoring: Configuration and Best Practice

UCS Network Utilization Monitoring: Configuration and Best Practice UCS Network Utilization Monitoring: Configuration and Best Practice Steve McQuerry Technical Marketing Engineer Unified Computing Systems Cisco Systems, Inc. Document Version 1.0 1 Copyright 2013 Cisco

More information

VBLOCK SOLUTION FOR SAP APPLICATION HIGH AVAILABILITY

VBLOCK SOLUTION FOR SAP APPLICATION HIGH AVAILABILITY Vblock Solution for SAP Application High Availability Table of Contents www.vce.com VBLOCK SOLUTION FOR SAP APPLICATION HIGH AVAILABILITY Version 2.0 February 2013 1 Copyright 2013 VCE Company, LLC. All

More information

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Table of Contents Introduction.... 3 vsphere Architectural Overview... 4 SAN Backup

More information

ClearPath Storage Update Data Domain on ClearPath MCP

ClearPath Storage Update Data Domain on ClearPath MCP ClearPath Storage Update Data Domain on ClearPath MCP Ray Blanchette Unisys Storage Portfolio Management Jose Macias Unisys TCIS Engineering September 10, 2013 Agenda VNX Update Customer Challenges and

More information

Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems

Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Applied Technology Abstract This white paper investigates configuration and replication choices for Oracle Database deployment with EMC

More information