FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology
August 2011
Executive Summary

HP commissioned Network Test to assess the performance of Intelligent Resilient Framework (IRF), a method of virtualizing data center switch fabrics for enhanced bandwidth and availability. On multiple large-scale test beds, IRF clearly outperformed existing redundancy mechanisms such as the spanning tree protocol (STP) and the virtual router redundancy protocol (VRRP). Among the key findings of these tests:

- Using VMware's vMotion facility, average virtual machine migration time was around 43 seconds on a network running IRF, compared with around 70 seconds with rapid STP
- IRF virtually doubled network bandwidth compared with STP and VRRP, with much higher throughput rates regardless of frame size
- IRF converged around failed links, line cards, and systems vastly faster than existing redundancy mechanisms such as STP
- In the most extreme failover case, STP took 31 seconds to recover after a line card failure; IRF recovered from the same event in 2.5 milliseconds
- IRF converges around failed network components far faster than HP's 50-millisecond claim

This document briefly explains IRF concepts and benefits, and then describes procedures and results for IRF tests involving VMware vMotion, network bandwidth, and resiliency.

Introducing IRF

IRF consolidates multiple physical switches so that they appear to the rest of the network as a single logical entity. This virtual fabric can comprise up to nine switches, runs on HP's high-end switch/routers,[1] and can encompass hundreds or thousands of gigabit and 10-gigabit Ethernet ports. IRF offers advantages in terms of simpler network design, ease of management, disaster recovery, performance, and resiliency. A virtual fabric essentially flattens the data center from three layers into one or two using HP Virtual Connect technology, requiring fewer switches. Device configuration and management also become simpler.
Within an IRF domain, configuration of a single primary switch is all that's needed; the primary switch then distributes relevant configuration and protocol information to the other switches in the IRF domain. IRF also supports an in-service software upgrade (ISSU) capability that allows individual switches to be taken offline for upgrades without affecting the rest of the virtual fabric. For disaster recovery, switches within an IRF domain can be deployed across multiple data centers. According to HP, a single IRF domain can link switches up to 70 kilometers (43.5 miles) apart.

[1] IRF support is included at no cost on HP 12500, 9500, 7500, 58xx, and 55xx switches.
IRF improves performance and resiliency, as shown in test results described later in this report. A common characteristic of existing data center network designs is their inefficient redundancy mechanisms, such as STP or VRRP. Both these protocols (along with modern versions of STP such as rapid STP and multiple STP) use an active/passive design, where only one pair of interfaces between switches forwards traffic, and all others remain idle until the active link fails. With active/passive mechanisms, half (or more) of all inter-switch links sit idle most of the time. Moreover, both STP and VRRP take a relatively long time to recover from link or component faults, typically on the order of seconds.

IRF uses an active/active design that enables switches to forward traffic on all ports, all the time. This frees up bandwidth, boosting performance for all applications. Data centers using virtualization benefit especially from this design, since the additional bandwidth allows virtual machines to be moved faster between hypervisors. IRF's active/active design also reduces downtime when link, component, or system failures occur.

IRF Speeds VMware Performance

Over the past few years VMware's vMotion capability has become the killer app for large-scale data centers and cloud computing. The ability to migrate virtual machines between physical hosts with zero downtime is a boon to network managers, but also a challenge. As data centers scale up in size and network managers use vMotion to migrate ever-larger numbers of virtual machines, network performance can become a bottleneck. This is an acute concern for disaster recovery and other high-availability applications, where rapid migration of virtual machines is essential. Network Test and HP engineers constructed a large-scale test bed to compare vMotion performance using IRF and the rapid spanning tree protocol (RSTP).
With both mechanisms, the goal was to measure the time needed for vMotion migration of 128 virtual machines, each with 8 Gbytes of RAM, running Microsoft SQL Server on Windows Server. Before each migration event, test engineers verified maximum memory usage on each VM, ensuring the most stressful possible load in terms of network utilization. The test migrated virtual machines between VMware ESXi hosts running on a total of 32 HP BL460 G7 blade servers.

In the RSTP case, the network used a typical active/passive design, with some ports forwarding traffic between access and core switches, and others in blocking state (see Figure 1). In this design, RSTP provides excellent loop prevention but also limits available bandwidth; note that half the inter-switch connections shown here are in blocking state.
Figure 1: VMware vMotion with RSTP

By virtualizing the core switching infrastructure, IRF increases network capacity (see Figure 2). Here, all inter-switch ports are available, with no loss in redundancy compared with spanning tree. The core switches appear to the rest of the network as a single logical device. That converged device can forward traffic on all links with all attached access switches. Moreover, IRF can be used together with the link aggregation control protocol (LACP) to add capacity to links between switches (or between switches and servers), again with all ports available all the time. This isn't possible with STP or RSTP, since at least half (and possibly more) of all inter-switch links must remain in blocking state at all times.
Figure 2: VMware vMotion with IRF

For both the RSTP and IRF scenarios, engineers used custom-developed scripts to trigger vMotion migration of 128 virtual machines. In all three trials, IRF clearly outperformed RSTP in terms of average vMotion migration time (see Figure 3). On average, migrations over RSTP took around 70 seconds to complete, while vMotion over IRF took around 43 seconds.
Figure 3: Average vMotion times over RSTP and IRF

Boosting Bandwidth: IRF Speeds Throughput

It's important to note that actual vMotion times depend on many variables, including server and storage resources, virtual machine configuration, and VMware vSphere parameters. As with all application performance tuning, results can vary from site to site. To determine a more basic metric, raw network capacity, Network Test ran additional tests to compare bandwidth available with and without IRF.

Network Test conducted throughput tests comparing IRF with STP at layer 2 and VRRP at layer 3.[2] Figure 4 below shows the test beds used to compare throughput in the STP and IRF test cases. While both tests involve the same 12500 and 5820 switches, note that half the ports between them are in blocking state in the STP test case (see the left side of Figure 4). The IRF configuration makes use of all inter-switch links (see the right side of the figure), with the two switches appearing to the network as a single entity. The test bed for VRRP was similar to that shown here, except that Network Test used 20 gigabit Ethernet connections between each traffic generator and the 5820s to increase emulated host count and ensure more uniform distribution of traffic across VRRP connections. Engineers configured the switches in bridge extended mode, which improves performance by allocating additional memory to switching processes.

[2] Network Test did not compare IRF and rapid spanning tree protocol (RSTP) throughput because RSTP results would be identical to those with STP in this particular configuration.

The IRF configuration uses LACP on connections between the 12500 and 5820 switches. From the perspective of the 5820 access switches, the link-aggregated connection to the core appears to be a single virtual connection. This allows for interesting network designs in disaster-recovery scenarios, for example with IRF and link aggregation connecting switches in different physical locations.

Figure 4: Spanning tree and IRF test beds

Network Test followed the procedures described in RFCs 2544 and 2889 to determine system throughput. Engineers configured the Spirent TestCenter traffic generator/analyzer to offer bidirectional traffic in an east/west direction, meaning all frames on the west side of the test bed were destined for the east side and vice versa. The aggregate load offered to each side was equivalent to 20 Gbit/s of traffic, equal to the theoretical maximum capacity of the access-core links. To measure throughput, engineers offered traffic at varying loads, each for a 60-second duration, to determine the highest rate at which the switches would forward all frames with zero frame loss. As defined in RFC 1242, this is the throughput rate.
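The zero-loss throughput search described above is, in essence, a binary search over offered load. The sketch below is illustrative only: send_traffic() is a hypothetical stand-in for a real tester such as Spirent TestCenter, and the simulated device is assumed to forward loss-free up to 50 percent of line rate, as in the STP test case.

```python
def send_traffic(load_percent: float, duration_s: int = 60) -> int:
    """Hypothetical stand-in for the tester API: offer traffic at the
    given load for the trial duration and return frames lost.
    Simulates a device that is loss-free up to 50% of line rate."""
    return 0 if load_percent <= 50.0 else 1

def find_throughput(lo: float = 0.0, hi: float = 100.0,
                    resolution: float = 0.1) -> float:
    """Binary-search the highest load (% of line rate) with zero loss."""
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_traffic(mid) == 0:   # zero loss: throughput is at least mid
            best, lo = mid, mid
        else:                        # loss observed: back off
            hi = mid
    return best

print(round(find_throughput(), 1))  # 50.0 for this simulated device
```

In practice the tester repeats this search per frame size, which is why each throughput trial runs for a fixed 60-second duration.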
Engineers repeated these tests with various frame sizes, ranging from 64 bytes (the minimum in Ethernet) through 1,518 bytes (the nominal maximum in Ethernet) and 2,176 bytes (often seen in data centers that use Fibre Channel for storage) to 9,216 bytes (the nonstandard but still widely used jumbo frame size common in data centers).

Figure 5 below presents throughput results, expressing the throughput rate as a percentage of the theoretical maximum rate. For all frame sizes, IRF nearly doubled channel capacity, delivering near line-rate throughput while the active/passive solutions delivered throughput of only 50 percent of channel capacity.

Figure 5: Throughput as a percentage of the theoretical maximum

Note also that bandwidth utilization nearly doubles with IRF in every test case, regardless of frame length. This validates IRF's ability to deliver far more network bandwidth, which in turn speeds performance for all applications, regardless of traffic profile.

The throughput figures presented in Figure 5 are given as percentages of theoretical line rate. As a unit of measurement, throughput itself is actually a rate, not a percentage. Figure 6 below presents the same data from the throughput tests, this time with throughput expressed in frames per second for each test case. Regardless of how it's expressed, IRF provides nearly double the network bandwidth of other layer-2 and layer-3 resiliency mechanisms.
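The theoretical maximum frame rates behind these percentages follow directly from the frame size plus Ethernet's fixed 20 bytes of per-frame overhead (8-byte preamble plus 12-byte inter-frame gap). A minimal sketch for the 20 Gbit/s access-core channel used in this test bed:

```python
# Theoretical line-rate frames per second for each tested frame size,
# assuming standard Ethernet overhead of 20 bytes per frame.

LINK_BPS = 20e9  # aggregate access-core capacity in this test bed

def max_fps(frame_bytes: int, link_bps: float = LINK_BPS) -> int:
    """Line-rate frame rate = link bits/s / ((frame + overhead) * 8)."""
    return int(link_bps // ((frame_bytes + 20) * 8))

for size in (64, 1518, 2176, 9216):
    print(f"{size:>5} bytes: {max_fps(size):>10,} frames/s")
```

At 64 bytes this works out to roughly 29.8 million frames per second across the 20 Gbit/s channel, which is the 100 percent mark against which Figure 5's percentages are computed.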
Figure 6: Comparing STP, VRRP, and IRF throughput

Faster Failovers: IRF Improves Resiliency

While high performance is certainly important, high availability is an even more critical requirement in enterprise networking. This is especially true in the data center, where even a small amount of downtime can mean significant revenue loss and other disruptions. IRF aims to improve network uptime by recovering from link or component failures far faster than mechanisms such as STP, RSTP, or VRRP. Networks running STP can take tens of seconds to converge following a single link or component failure. Rapid spanning tree and VRRP are newer and faster, but convergence time can still be significant. HP claims IRF will converge in 50 milliseconds.

To assess that claim, Network Test used the same test bed as in the STP and IRF throughput tests (see Figure 4, again). This time, engineers intentionally caused a failure, and then derived convergence time by examining frame loss as reported by Spirent TestCenter. Engineers tested three failure modes:

- Link failure: With traffic active, engineers disconnected the link between the east 12500 and the east 5820 switches, forcing traffic to be rerouted onto an alternative path
- Card failure: With traffic active, engineers pulled an active line card from the east 12500, forcing traffic to be rerouted
- System failure: With traffic active, engineers cut power to the east 12500, forcing traffic to be rerouted through the west 12500

In all cases, Spirent TestCenter offered bidirectional streams of 64-byte frames at exactly 50 percent of line rate throughout the test. This is the most stressful non-overload condition possible; in theory, the system under test is never congested, even during component failure. Thus, any frames dropped during this test were a result of, and only of, path re-computation following a component failure.

Figure 7 below compares convergence times for conventional STP with IRF configured for layer-2 operation. For all failure modes, IRF converges vastly faster than STP. Further, IRF converges far faster than HP's 50-ms claim in all cases. In fact, the differences between STP and layer-2 IRF are so large that they cannot be plotted on the same scale as those of the other failover mechanisms.

STP convergence times in this test are, if anything, lower than those typically seen in production networks. In this test, with only two sets of interfaces transitioning between forwarding and blocking states, convergence occurred relatively quickly; in production networks with more ports and switches, STP convergence times typically run on the order of 45 to 60 seconds. IRF convergence times in production may be higher as well, although by a far smaller amount than with STP.

Figure 7: STP vs. IRF convergence times
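Because the offered load is constant and known, convergence time falls directly out of the frame-loss count: each lost frame corresponds to a fixed slice of outage time. The sketch below illustrates the arithmetic; the 20 Gbit/s aggregate rate and the loss count are assumptions chosen for illustration, not measured values from this test.

```python
# Deriving convergence time from frame loss at a constant offered rate.

def convergence_ms(frames_lost: int, offered_fps: float) -> float:
    """Convergence time in ms = frames lost / offered frame rate."""
    return frames_lost / offered_fps * 1000.0

# 64-byte frames at 50 percent of line rate; 20 Gbit/s aggregate capacity
# is an illustrative assumption (64 bytes + 20 bytes Ethernet overhead).
offered_fps = 0.5 * 20e9 / ((64 + 20) * 8)  # about 14.88 million frames/s

# Hypothetical loss count, chosen to show how a 2.5 ms recovery appears:
print(round(convergence_ms(37_202, offered_fps), 1))  # 2.5
```

This is why the test offers traffic at exactly 50 percent of line rate: with no congestion possible, every dropped frame can be attributed to the failover itself.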
Network Test also compared IRF with rapid spanning tree, the newer and faster mechanism described in IEEE specification 802.1w. While RSTP converges much faster than STP, it's still no match for IRF in recovering from network outages (see Figure 8).

Figure 8: RSTP vs. IRF convergence times

As with STP, the convergence times measured for RSTP were substantially lower than those typically seen on production networks, perhaps due to the small number of links involved. In production settings, RSTP convergence often takes between 1 and 3 seconds following a link or component failure.

The final test compared VRRP and IRF convergence times, with the HP 12500 and HP 5820 switches both configured in layer-3 mode. In this case, both VRRP and IRF present a single IP address to other devices in the network, and this address migrates to a secondary system when a failure occurs. Here again, IRF easily outpaced VRRP when recomputing paths after a network failure (see Figure 9). VRRP took between 1.9 and 2.2 seconds to converge, compared with times of single-digit milliseconds or less for IRF.
Figure 9: VRRP vs. IRF convergence times

Conclusion

These test results validate IRF's benefits in the areas of network design, performance, and reliability. IRF simplifies network architectures in campus networks and data centers by combining multiple physical switches and presenting them as a single logical fabric to the rest of the network. This approach results in far faster transfer times for virtual machines using VMware vMotion. Performance testing also shows that IRF nearly doubles available bandwidth by virtue of its active/active design, compared with active/passive designs that tie up switch ports for redundancy. And the results also show huge improvements in convergence times following network failures, both in layer-2 and layer-3 modes, enhancing reliability and improving application performance.
Appendix A: About Network Test

Network Test is an independent third-party test lab and engineering services consultancy. Our core competencies are performance, security, and conformance assessment of networking equipment and live networks. Our clients include equipment manufacturers, large enterprises, service providers, industry consortia, and trade publications.

Appendix B: Software Releases Tested

This appendix describes the software versions used on the test bed. Testing was conducted in July and August 2011 at HP's facilities in Littleton, MA, and Cupertino, CA, USA.

Component            Version
HP                   Release 1335P03
HP                   Release 1211
VMware vSphere       4.1 Update 1
Spirent TestCenter

Appendix C: Disclaimer

Network Test Inc. has made every attempt to ensure that all test procedures were conducted with the utmost precision and accuracy, but acknowledges that errors do occur. Network Test Inc. shall not be held liable for damages that may result from the use of information contained in this document. All trademarks mentioned in this document are property of their respective owners.

Copyright 2011 Network Test Inc. All rights reserved. Network Test Inc., Via Colinas, Suite 113, Westlake Village, CA, USA