Large Unstructured Data Storage in a Small Datacenter Footprint: Cisco UCS C3160 and Red Hat Gluster Storage 500-TB Solution




Performance White Paper

Executive Summary

Today, companies face scenarios that require an IT architecture that can hold hundreds of terabytes of unstructured data. Storage-intensive enterprise workloads can encompass the following:

- Archiving and backup, including backup images and near-online (nearline) archives
- Rich media content storage and delivery, such as videos, images, and audio files
- Enterprise drop-box
- Cloud and business applications, including log files, RFID data, and other machine-generated data
- Virtual and cloud infrastructure, such as virtual machine images
- Emerging workloads, such as co-resident applications

This document is intended to assist organizations that are seeking an ultra-dense, high-throughput solution that can store a large amount of unstructured data in a small amount of rack space. The paper provides testing results that showcase how the Cisco Unified Computing System (Cisco UCS) C3160 Rack Server and Red Hat Gluster Storage, along with Cisco Nexus 9000 Series Switches, can be optimized to serve in these scenarios. It includes system setup information, testing methodology, and results for a 500-TB solution. Initial tests indicate that the Cisco/Red Hat solution is ultra-dense, scalable, and high-throughput, and that it can store a large amount of unstructured data while remaining easy to manage.

Solution Overview

The Cisco UCS C3160 Rack Server and Red Hat Gluster Storage, combined with Cisco Nexus 9000 Series Switches, provide a complete 500-TB solution for high-volume, unstructured data storage.

Cisco UCS C3160 Rack Server

The Cisco UCS C3160 Rack Server is a modular, high-density rack server ideal for service providers, enterprises, and industry-specific environments that require highly scalable computing with high-capacity local storage.
Designed for a new class of cloud-scale applications, it is simple to deploy and well suited for software-defined storage environments, unstructured data repositories, Microsoft Exchange, backup and archival, media streaming, and content distribution. Based on the Intel Xeon processor E5-2600 v2 series, the server offers up to 360 TB of local storage in a compact four-rack-unit (4RU) form factor. The server also helps organizations achieve the highest levels of data availability because its hard-disk drives are individually hot-swappable and are backed by built-in enterprise-class Redundant Array of Independent Disks (RAID).

2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

Unlike typical high-density rack servers that require extended-depth racks, the Cisco UCS C3160 fits comfortably in a standard-depth rack, such as the Cisco UCS R42610.

Cisco Nexus 9300

Application workloads that are deployed across a mix of virtualized and nonvirtualized server and storage infrastructure require a network infrastructure that provides consistent connectivity, security, and visibility across a range of bare-metal, virtualized, and cloud computing environments. The Cisco Nexus 9000 Series Switches provide a flexible, agile, low-cost, application-centric infrastructure (ACI) and include both modular and fixed-port switches that are designed to overcome the challenges of workloads that span a mix of virtualized and nonvirtualized server and storage infrastructure.

Cisco Nexus 9300 fixed-port switches are designed for top-of-rack (ToR) and middle-of-row (MoR) deployment in data centers that support enterprise applications, service provider hosting, and cloud computing environments. The switches are Layer 2 and Layer 3 nonblocking 10 and 40 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) capable switches with up to 2.56 Tbps of internal bandwidth. These high-density, nonblocking, low-power-consuming switches can be used in ToR, MoR, and end-of-row (EoR) deployments in enterprise data centers, service provider facilities, and large virtualized and cloud computing environments.

The Cisco Nexus 9300 offers industry-leading density and performance with flexible port configurations that can support existing copper and fiber cabling. With 1/10GBASE-T support, the switches can deliver 10 Gigabit Ethernet over existing copper cabling, which enables a low-cost upgrade from Cisco Catalyst 6500 Series Switches when the switches are used in an MoR or EoR configuration.

Red Hat Gluster Storage

Red Hat Gluster Storage is open, software-defined scale-out storage that easily manages unstructured data for physical, virtual, and cloud environments. Combining both file and object storage with a scale-out architecture (see Figure 1), it is designed to cost-effectively store and manage petabyte-scale data growth. It also delivers a continuous storage fabric across physical, virtual, and cloud resources so organizations can transform their big, semi-structured, and unstructured data from a burden to an asset.

Figure 1. Red Hat Gluster Storage Scales Up and Out

Built on the industry-leading Red Hat Enterprise Linux operating system, Red Hat Gluster Storage offers cost-effective and highly available storage without scale or performance compromises. Organizations can use it to avoid storage silos by enabling global access to data through multiple file and object protocols. And it works seamlessly with Cisco UCS C3160 servers.

Table 1. Red Hat Gluster Storage Features

- Single global namespace: Aggregates disk and memory resources into a single trusted storage pool.
- Replication: Supports synchronous replication within a data center and asynchronous replication for disaster recovery.
- Snapshots: Help assure data protection through cluster-wide filesystem snapshots that are user-accessible for easy recovery of files.
- Elastic hashing algorithm: Eliminates performance bottlenecks and single points of failure because there is no metadata server layer.
- Easy online management: Web-based management console; powerful and intuitive command-line interface for Linux administrators; Nagios-based monitoring; ability to expand and shrink storage capacity without downtime.
- Industry-standard client support: Network File System (NFS) and Server Message Block (SMB) for file-based access; OpenStack Swift for object access; GlusterFS native client for highly parallelized access.

System Specifications

This 500-TB solution includes Cisco UCS C3160 Rack Servers, Red Hat Gluster Storage, and Cisco Nexus 9300 switches. For the purpose of running the performance tests, the components of the solution were configured to the specifications described in this section. Figure 2 shows a diagram of the system.

Figure 2. System Diagram

Component Configuration

Table 2 describes how the components of the solution were configured for the tests.

Table 2. Component Configuration

- Cisco UCS C3160 Rack Server: Four Cisco UCS C3160 servers, each configured with:
  - Fifty-six 6-TB 7,200-rpm disks
  - Two Intel Xeon E5-2695 v2 CPUs
  - 256 GB of RAM (sixteen 16-GB DDR3 1866-MHz DIMMs)
  - Two Cisco UCS C3160 system I/O controllers, each with a single adapter card slot
  - Two Cisco UCS Virtual Interface Card (VIC) 1227 dual-port 10-Gbps Enhanced Small Form-Factor Pluggable (SFP+) adapters
  - One Cisco 12G SAS RAID controller with 4-GB flash-backed write cache
- Operating system: Red Hat Enterprise Linux 6.6
- Gluster software: Red Hat Gluster Storage 3.0.4
- Connectivity: Two Cisco Nexus 9300 48-port 1/10G SFP+ switches with Nexus 9300 Base, running Cisco NX-OS Release 7.0(3)
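As a sanity check on the 500-TB figure, the usable capacity implied by this configuration can be worked out with shell arithmetic (a sketch; the final usable number also depends on XFS formatting overhead):

```shell
# Per server: 56 disks arranged as four 14-disk RAID 6 arrays (2 parity disks each).
# Four servers total, with two-way replication in the Gluster volume.
servers=4; arrays_per_server=4; disks_per_array=14; parity=2; disk_tb=6; replica=2
raw_usable=$(( servers * arrays_per_server * (disks_per_array - parity) * disk_tb / replica ))
echo "${raw_usable} TB usable before filesystem overhead"
# Prints 576 TB; at 66 TB usable of each 72-TB virtual disk after XFS
# formatting, that works out to roughly 528 TB, i.e., the nominal 500 TB.
```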

RAID Configuration

The RAID configuration included the following:

- Each UCS C3160 server included four 14-disk RAID 6 arrays for the Gluster bricks, created using the LSI RAID controller graphical configuration utility.
- The RAID controller cache was set to write back with battery-backed write cache.
- Each RAID 6 array featured one virtual disk (72 TB), and each virtual disk had a capacity of 66 TB after formatting with XFS.

StorCLI output results are shown in Appendix A.

Network Configuration

The network was configured as follows:

- Each UCS C3160 server includes two UCS VIC 1227 adapters. Each VIC 1227 adapter has two 10-Gb ports, with one port connected to Nexus 9300 switch A and the other port connected to Nexus 9300 switch B. This means that each UCS C3160 has two 10-Gb connections (one from each VIC 1227) to each Nexus 9300.
- The two Nexus 9300 switches have a 160-Gb virtual port channel peer link configured between them, so that both switches appear as a single logical switch.
- The four 10-Gb interfaces from each C3160 are configured as a single 40 Gigabit Ethernet interface using a Link Aggregation Control Protocol (LACP) 802.3ad bond (bonding mode 4).

Output from ifcfg-bond0 and from one of the Ethernet devices is shown in Appendix B.

Red Hat Gluster Storage Installation and Configuration

Installation and configuration of Red Hat Gluster Storage comprised several steps:

1. Install the Red Hat Gluster Storage 3.0.4 ISO image on the Cisco UCS C3160 servers. This release is based on Red Hat Enterprise Linux 6.6 and GlusterFS 3.6. For step-by-step installation and configuration instructions for Red Hat Gluster Storage, visit https://access.redhat.com/site/documentation/en-us/red_hat_storage/
2. Install Red Hat Enterprise Linux 7.1 on the client servers, following the instructions in the Installing Native Client section of the Red Hat Gluster Storage Administration Guide.
3. Create the storage bricks, using the rhs-server-init.sh script on all nodes. The rhs-server-init.sh script does the following:
   - Creates a physical volume.
   - Creates a logical volume.
   - Makes the XFS file system on the logical volume.
   - Applies a tuned performance profile for Red Hat Gluster Storage virtualization.

2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.
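The rhs-server-init.sh script itself is not reproduced in this paper; a minimal sketch of the four actions it performs, for one RAID 6 virtual disk, might look like the following (device, volume group, mount point, and profile names are assumptions, not taken from the original):

```shell
#!/bin/bash
# Hypothetical sketch of what rhs-server-init.sh does for one brick.
set -e
DEVICE=/dev/sdb    # assumed device name of a 72-TB RAID 6 virtual disk

pvcreate "$DEVICE"                                # 1. create a physical volume
vgcreate vg_brick1 "$DEVICE"                      #    (volume group name assumed)
lvcreate -l 100%FREE -n lv_brick1 vg_brick1       # 2. create a logical volume
mkfs.xfs -i size=512 /dev/vg_brick1/lv_brick1     # 3. make the XFS file system
                                                  #    (-i size=512 is the common Gluster recommendation)
mkdir -p /bricks/brick1
mount /dev/vg_brick1/lv_brick1 /bricks/brick1
tuned-adm profile rhs-virtualization              # 4. apply the tuned profile (name assumed)
```

Each server would repeat this for all four of its RAID 6 virtual disks, yielding four bricks per server.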

4. Make sure the glusterd daemon is running on each of the Red Hat Gluster Storage servers (gluster1, gluster2, gluster3, and gluster4), and then use the gluster peer probe command to create the trusted storage cluster from the gluster1 server.
5. Confirm that all the storage servers are in a connected state using the gluster peer status command.
6. Create the distributed replicated (two-way) volume from the gluster1 server. (See Appendix C for the full volume creation script.)
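The full volume creation script appears in Appendix C; steps 4 through 6 can be sketched as follows. The brick paths and the exact brick ordering here are assumptions for a 4-brick-per-server, 8 x 2 distributed replicated layout:

```shell
# Step 4: from gluster1, form the trusted storage pool.
for node in gluster2 gluster3 gluster4; do
    gluster peer probe "$node"
done

# Step 5: verify that all peers are in the Connected state.
gluster peer status

# Step 6: create the two-way distributed replicated volume. With replica 2,
# consecutive brick pairs become replica sets, so bricks are ordered to keep
# each pair on two different servers (paths are hypothetical).
gluster volume create gvol0 replica 2 \
    gluster1:/bricks/brick1/gvol0 gluster2:/bricks/brick1/gvol0 \
    gluster3:/bricks/brick1/gvol0 gluster4:/bricks/brick1/gvol0 \
    gluster1:/bricks/brick2/gvol0 gluster2:/bricks/brick2/gvol0 \
    gluster3:/bricks/brick2/gvol0 gluster4:/bricks/brick2/gvol0 \
    gluster1:/bricks/brick3/gvol0 gluster2:/bricks/brick3/gvol0 \
    gluster3:/bricks/brick3/gvol0 gluster4:/bricks/brick3/gvol0 \
    gluster1:/bricks/brick4/gvol0 gluster2:/bricks/brick4/gvol0 \
    gluster3:/bricks/brick4/gvol0 gluster4:/bricks/brick4/gvol0
```

Sixteen bricks with replica 2 give the eight distribute subvolumes described in the IOzone testing below.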

7. Start the volume and look at the status.
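Step 7 amounts to a pair of commands (the status output shown in the original is a screenshot and is not reproduced here):

```shell
gluster volume start gvol0     # bring the volume online
gluster volume status gvol0    # confirm every brick process and auxiliary daemon is running
gluster volume info gvol0      # review the type (Distributed-Replicate), brick count, and options
```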

8. Confirm that all the storage servers are in a connected state.
9. Mount the GlusterFS volume gvol0 on client 1.
10. Repeat these commands on the other fifteen clients.

Note: Even though the same Red Hat Gluster Storage server is used as the mount server for all of the volumes, the Native Client protocol has built-in load balancing. The clients use the mount server initially to get the volume information; after that, they contact the individual storage servers directly to access the data. Data requests do not have to go through the mount server.

Test Methodology and Results

The testing environment was set up to include industry-standard tools to measure read/write performance and to provide benchmarking for performance across a specific cluster and performance of distributed storage.
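Picking up steps 9 and 10 above, the client-side mount can be sketched as follows (the mount point and fstab options are assumptions, not from the original):

```shell
# Mount the GlusterFS volume with the native (FUSE) client. gluster1 is only
# the initial contact point; I/O then flows directly to all storage servers.
mkdir -p /mnt/gvol0
mount -t glusterfs gluster1:/gvol0 /mnt/gvol0

# Hypothetical fstab entry to persist the mount across reboots:
echo 'gluster1:/gvol0 /mnt/gvol0 glusterfs defaults,_netdev 0 0' >> /etc/fstab
```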

Driver Setup

Eight UCS B200 M4 servers were used to drive I/O to the 500-TB Gluster cluster. Each server was equipped as follows:

- Two Intel Xeon E5-26XX CPUs
- 256-GB RAM
- Cisco VIC 1240, capable of 40-Gbps throughput

The UCS 5100 chassis housing the B200 M4 servers includes two 8-port UCS 2208XP I/O modules connected to two UCS 6248UP fabric interconnects. The fabric interconnects have 16 uplinks to the Nexus 9300. To drive I/O, each UCS B200 M4 server also ran a hypervisor and two virtual machines, each with a dedicated 10 Gigabit Ethernet network interface. Each virtual machine was running Red Hat Enterprise Linux 7.1 and the I/O benchmark tools discussed below.

Sequential I/O Benchmarking (IOzone)

IOzone is a filesystem benchmark tool that is useful for performing a broad filesystem analysis of a computer platform. In this case, IOzone was used to test the sequential read/write performance of the GlusterFS replicated volume. IOzone's cluster mode option (-+m) is particularly well suited for distributed storage testing because testers can start many worker threads from various client systems in parallel, targeting the GlusterFS volume. The 8 x 2 distributed replicated volume was tested using the following command with a total of 128 threads of execution across 16 client systems (eight threads per client):

# iozone -+m ${IOZONE_CONFIG_FILENAME} -i ${IOZONE_TEST} -C -w -+n -s ${IOZONE_FILESZ} -r ${IOZONE_RECORDSZ} -+z -c -e -t ${TEST_THREADS}

The following parameters were used in the IOzone command line:

- -+m specifies cluster testing mode. IOZONE_CONFIG_FILENAME is the IOzone config file for the cluster mode. The file lists the client host names and the associated GlusterFS mount point.
- The IOZONE_TEST parameter was varied to cover the sequential read and sequential write test cases.
- IOZONE_FILESZ was 8 GB. 8-GB file transfers were used as a representative workload for the Content Cloud reference architecture testing.
- IOZONE_RECORDSZ was varied between 64 KB and 16 MB, using record sizes that were powers of two. This range of record sizes was meant to characterize the effect of record or block size (in client file requests) on I/O performance.
- TEST_THREADS specifies the number of threads or processes that are active during the measurement. 128 threads were used across 16 client systems (eight threads per client).

Figure 3 shows up to 12 GB per second of sequential read throughput and up to 4 GB per second of replicated write throughput for typical sequential I/O sizes.
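The cluster-mode configuration file passed through IOZONE_CONFIG_FILENAME simply maps each client host to a working directory on the GlusterFS mount and the path to the iozone binary. A hypothetical way to generate it for the 16 clients (hostnames and paths are assumptions):

```shell
# Build an IOzone -+m client list: one line per client containing the
# hostname, a working directory on the GlusterFS mount, and the iozone path.
for i in $(seq 1 16); do
    echo "client${i} /mnt/gvol0/iozone-work /usr/bin/iozone"
done > iozone_clients.cfg

# Example sequential-write invocation with 8-GB files and a 1-MB record size:
# iozone -+m iozone_clients.cfg -i 0 -C -w -+n -s 8g -r 1024k -+z -c -e -t 128
```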

Figure 3. Sequential Throughput (IOzone)

Performance Benchmarking Across a Cluster (SmallFile)

The SmallFile benchmark is a Python-based, distributed POSIX workload generator that can be used to measure performance for a variety of small-file and file-metadata-intensive workloads across an entire cluster. This benchmark was used to complement the IOzone benchmark, which measures the performance of large-file workloads. Although the SmallFile benchmark kit supports multiple operations, only the create operation was used, creating files and writing data to them using file sizes ranging from 10 KB to 2 GB. SmallFile benchmark parameters included:

- The create operation, to create a file and write data to it
- Eight threads on each of the 16 clients (client1 through client16)
- A file size of 10 KB, with each thread processing 100,000 files
- A 10-microsecond pause before each thread starts the next file operation; the response time is the file operation duration, measured to microsecond resolution
- For files larger than 1 MB, a record size of 1024 KB, which determines how much data is transferred in a single write system call

The SmallFile command line is as follows:

The benchmark returns the number of files processed per second and the rate at which the application transferred data in megabytes per second. SmallFile test results show up to 4,300 tiny files per second being created simultaneously and up to 3.3 GB per second of write throughput for the larger files (Figure 4).
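The SmallFile command line itself did not survive transcription. For the 10-KB create case described above, an invocation might look like the following; the option names follow the upstream smallfile tool, but treat the whole command as an assumption rather than the paper's exact command:

```shell
# Hypothetical smallfile invocation for the 10-KB create test:
# 8 threads on each of 16 clients, 100,000 files per thread,
# with a 10-microsecond pause between file operations.
python smallfile_cli.py --operation create \
    --top /mnt/gvol0/smf \
    --host-set client1,client2,client3,client4,client5,client6,client7,client8,client9,client10,client11,client12,client13,client14,client15,client16 \
    --threads 8 --files 100000 --file-size 10 --pause 10

# For the larger-file runs (over 1 MB), a record size would be added, e.g.:
# --file-size 2097152 --record-size 1024
```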

Figure 4. File CREATE Operations in the SmallFile Tool

Random I/O Performance (FIO)

Flexible I/O (FIO) is an industry-standard storage performance benchmark that has been used primarily for single-server testing; however, with the recent addition of client/server options, it can also be used in a distributed storage environment. The front end and back end of FIO can be run separately, so the FIO server can generate an I/O workload on the system under test while being controlled from another system. The --server option is used to launch FIO in a request-listening mode. The --client option is used to pass a list of test servers along with the FIO profile that defines the workload to be generated. FIO was used to determine the random I/O performance using smaller block sizes (ranging from 4 KB to 32 KB). The FIO profile used on each client driver for the random tests is listed in Appendix D. The cluster accommodated about 25,000 random read IOPS at typical sizes with a latency of about 10 ms, as shown in Figure 5.
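The client/server mechanics described above can be sketched as follows. The job file here is a stand-in for illustration only, not the Appendix D profile:

```shell
# On each of the 16 clients: start fio in listening mode.
fio --server

# On a control host: write a hypothetical random-read job file.
cat > randread.fio <<'EOF'
[global]
directory=/mnt/gvol0/fio
ioengine=libaio
direct=1
rw=randread
bs=8k
size=8g
iodepth=16
runtime=300
time_based

[randread-job]
numjobs=8
EOF

# List the client drivers, one hostname per line.
printf 'client%d\n' $(seq 1 16) > fio_hosts.list

# Drive the workload on all clients at once from the control host.
fio --client=fio_hosts.list randread.fio
```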

Figure 5. FIO Random Read Tests

The solution achieves about 9,500 random write IOPS at typical sizes, as shown in Figure 6. Latency is less than 7 ms, thanks to the write cache on the array controllers on the C3160.

Figure 6. FIO Random Write Tests

Conclusion

Companies facing scenarios that require support for hundreds of terabytes of unstructured data can use the Cisco/Red Hat Gluster Storage 500-TB solution as a scalable, high-throughput solution that meets these demands while remaining easy to manage. Test results using FIO, SmallFile, and IOzone show remarkable performance and indicate that this solution is capable of handling storage-intensive enterprise workloads in a dense datacenter footprint.

For More Information

- Cisco UCS C3160 Rack Server
- Cisco Nexus 9300 switches
- Red Hat Gluster Storage
- IOzone test

Appendix A: StorCLI Output


Appendix B: Output from ifcfg-bond0 and One of the Ethernet Devices

Appendix C: Gluster Volume Creation Script

Appendix D: FIO Profile Used on Each Client Driver for the Random Tests

About Red Hat

Red Hat is the world's leading provider of open source software solutions, using a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage, and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT.

2015 Cisco and/or its affiliates. All rights reserved. Cisco and the Cisco logo, Catalyst, Cisco Nexus, Cisco Unified Computing System, and Cisco UCS are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, see the Trademarks page on the Cisco website. Third-party trademarks mentioned are the property of their respective owners. The use of the word "partner" does not imply a partnership relationship between Cisco and any other company. Printed in USA C11-734975-01 08/15