Enterprise Strategy Group | Getting to the bigger truth.

ESG Lab Review

Accelerating Network Virtualization Overlays with QLogic Intelligent Ethernet Adapters

Date: June 2016  Author: Jack Poller, Senior Lab Analyst

Executive Summary

Organizations of all sizes are constantly looking for ways to do more with less. With IT spending budgets limited and data growing at rapid rates, it's important for organizations to address the performance impact of the network virtualization overlays introduced in complex virtualization environments. QLogic looks to alleviate these concerns by offloading tunneling encapsulation from the host processor with the QLogic FastLinQ 3400 Series 10Gb Ethernet (10GbE) Intelligent Ethernet Adapters and the QLogic FastLinQ Series 25GbE Intelligent Ethernet Adapters.

ESG Lab tested the performance benefits of offloading tunneling for network virtualization overlays using IxChariot, an application performance and network assessment tool from Ixia. With tunneling offload, the FastLinQ 3400 Series 10GbE Adapter delivered an average 19% improvement in throughput, and offloading tunneling encapsulation enabled the host to operate 6% more efficiently. Likewise, the FastLinQ Series 25GbE Adapter delivered an average 18% increase in throughput. More impressively, by offloading the host, applications could transfer 55% more data per unit of host CPU work. These gains in efficiency enable QLogic to deliver on the IT mission of doing more with less.

The Challenges

In a recent ESG survey, respondents were asked to identify their top IT priorities for 2016. As they have been since 2010, increased use of server virtualization, increased use of desktop virtualization, improved backup and recovery, and business continuity and disaster recovery (cited by 18% of respondents) are all among the top ten most frequently cited IT priorities.1 These priorities place significant performance demands on the network infrastructure.
As a result, more than one quarter of those surveyed (27%) cited providing network performance, and 21% cited difficulties and/or delays in provisioning network services for new or updated applications and/or VM mobility, as their biggest network challenges (see Figure 1).2 Network performance and VM mobility become more important as organizations transition to the modern data center, incorporating lessons learned from hyper-scale and public cloud environments, including highly virtualized multi-tenant infrastructures that span geographic locations. With more and more VMs and more and more tenants, network infrastructures quickly scale beyond the limits of traditional network overlay technologies such as VLANs.

1 Source: ESG Research Report, 2016 IT Spending Intentions Survey, February 2016.
2 Source: ESG Research Report, Data Center Networking Trends, February 2016.

This ESG Lab Review was commissioned by QLogic and is distributed under license from ESG.
Lab Review: Accelerating Network Virtualization Overlays with QLogic Ethernet Adapters

Figure 1. Top-Ten Biggest Networking Challenges

In your opinion, what are the biggest challenges facing your organization's networking team? (Percent of respondents, N=306, five responses accepted)

- Meeting budget constraints: 38%
- Challenges implementing security within the network: 35%
- Integration and cooperation between network operations and other IT domains: 29%
- Providing network performance: 27%
- Difficulty in recruiting new employees with the right skill sets: 23%
- Lack of coordination/cooperation with other IT functional teams: 21%
- Difficulties/delays in provisioning network services for new/updated applications and/or VM mobility: 21%
- Lack of knowledge about new technologies like SDN, white box switching, and network virtualization: 21%
- Inability to implement new technology in a timely manner due to a lack of maintenance windows: 19%
- Inability to implement new technology due to poor coordination between IT infrastructure teams: 19%

The Solution: Overlay Tunneling with QLogic Intelligent Ethernet Adapters

VMs can be migrated among any servers on the same subnet, and VLANs are used to provide separation and isolation between different groups of VMs and servers. However, there are only 4,096 possible VLAN IDs, which limits the utility of VLANs to smaller environments. Tunneling, the transmission of packets encapsulated within an outer header, also provides separation and isolation. As with VLANs, tunneling can be used to create a virtual L2 network. NVGRE encapsulates an L2 packet inside an IP packet, and is used by Microsoft Windows Server 2012 R2 Hyper-V. VXLAN encapsulates an L2 packet inside a UDP packet, and is used by VMware as well as by Linux-based hypervisors such as KVM and Xen. The benefits of using tunneling instead of VLANs include:

- Scalability: Tunneling can support up to 16 million separate IDs, compared with only 4,096 for VLANs.
- Shared infrastructure: Different tenants may use the same MAC addresses and VLANs while maintaining separation and isolation with tunneling.
- Reduced impact on network nodes: Network nodes no longer need to be aware of each VM's MAC address, shrinking the size of MAC/IP tables, which can be a critical scaling factor in modern leaf-spine network architectures.
- Simplified network management and migration: Tunneling decouples logical addresses (used in the inner headers by the VMs) from physical addresses (used in the outer headers by network nodes for switching and routing), providing location-independent addressing.
- Improved resource utilization: VLANs use the spanning tree protocol (STP) to avoid network loops, which can block up to half of the available network bandwidth; VXLAN uses L3 equal-cost multi-path (ECMP) routing to use all available network paths.
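For illustration, the VXLAN encapsulation described above can be sketched in a few lines. This is a simplified sketch of the RFC 7348 header layout, not QLogic's or any vendor's implementation; in practice the result is carried in a UDP datagram (destination port 4789) inside an outer IP packet:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

    Header layout: 1 byte of flags (0x08 = valid-VNI bit), 3 reserved
    bytes, a 3-byte VNI, and 1 reserved byte.
    """
    assert 0 <= vni < 2**24            # 24-bit VNI -> ~16 million segments
    flags = 0x08                        # I flag: the VNI field is valid
    header = struct.pack("!II", flags << 24, vni << 8)
    return header + inner_frame

# Toy inner Ethernet frame (14-byte header stand-in plus payload).
frame = b"\xff" * 14 + b"payload"
pkt = vxlan_encap(frame, vni=5000)
assert len(pkt) == len(frame) + 8       # encapsulation adds 8 bytes
```

The 24-bit VNI field is what lifts the segment count from the 12-bit VLAN ID's 4,096 to roughly 16 million.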
In a fully virtualized network, traditional network adapters cannot use their stateless offload features due to the added encapsulation of NVGRE or VXLAN. As a result, the hypervisor processes the tunneling traffic, which can exhaust system resources and significantly affect the performance of the entire system. The QLogic FastLinQ 3400 Series 10GbE Adapters and the FastLinQ Series 25GbE Adapters provide tunneling offload, accelerating performance and reducing CPU load. Performance gains come from:

- Transmit large segment offload (LSO)/TCP segmentation offload (TSO): The Ethernet adapter segments large data blocks into network packets, encapsulates the packets, and transmits them without the participation of the host CPU.
- Receive side scaling (RSS)/transmit side scaling (TSS): The Ethernet adapter hashes the header of the encapsulated data and routes each packet to one of many queues, allowing traffic to spread across multiple CPU cores for load balancing and scaling.
- Large receive offload (LRO): The Ethernet adapter receives and de-encapsulates multiple packets, reassembling the data into large blocks, without the participation of the host CPU (25GbE adapters only).

Figure 2. QLogic FastLinQ 3400 Series 10GbE and FastLinQ Series 25GbE Intelligent Ethernet Adapters3

QLogic 10GbE and 25GbE tunneling offload adapters are available in PCIe network interface card (NIC), rack network daughter card (NDC), blade NDC, and blade mezzanine form factors with one (10GbE adapters only) or two ports to meet most modern data center needs.
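The RSS queue-selection idea described above can be illustrated with a toy function. Real adapters use a keyed Toeplitz hash over the inner (encapsulated) headers; CRC32 stands in here purely to show the principle that packets of one flow always land on the same queue while different flows spread across cores:

```python
import zlib

NUM_QUEUES = 8  # illustrative: one receive queue per CPU core

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              num_queues: int = NUM_QUEUES) -> int:
    """Map a flow tuple to a receive queue, as RSS does in hardware.

    Hashing the full tuple keeps every packet of a flow on one queue
    (preserving in-order delivery) while spreading distinct flows
    across all queues for multi-core scaling.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_queues

# The same flow always maps to the same queue.
q1 = rss_queue("10.0.0.1", "10.0.0.2", 49152, 4789)
assert q1 == rss_queue("10.0.0.1", "10.0.0.2", 49152, 4789)
```

With tunneling offload, the adapter performs this hash on the inner headers; without it, every encapsulated packet looks like one outer flow and lands on a single queue, serializing the work on one core.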
Using these adapters in the modern fully virtualized data center brings the following benefits:

- Accelerated server performance: Full line-rate 10Gbps and 25Gbps throughput across all ports with full hardware offload for iSCSI and Fibre Channel over Ethernet (FCoE) storage traffic, along with tunneling offload (VXLAN, NVGRE, and GENEVE) for enhanced throughput and efficiency in the modern data center.4
- Increased virtualization CPU scaling: Full support for network virtualization technologies including switch-independent NIC partitioning (NPAR), single root I/O virtualization (SR-IOV), VMware NetQueue, and Microsoft VMQ.
- Enhanced deployment and management: Unified adapter and storage management with integrated lifecycle controller management and QLogic QConvergeConsole (QCC)/QLogic Control Suite (QCS).

3 For a complete list of QLogic Intelligent Ethernet adapters, please refer to
4 iSCSI and FCoE offload is available on select QLogic Ethernet adapters. Please check specifications for details.
Validating Tunneling Acceleration

ESG Lab performed hands-on testing of FastLinQ 3400 Series 10GbE Adapters and FastLinQ Series 25GbE Adapters, focusing on the performance gains that result from offloading tunneling. The test bed, shown in Figure 3, consisted of two VMware servers, each installed with its own QLogic Ethernet adapter. For 10Gbps testing, we used the QLogic QLE3442-SR Dual-port 10GbE Adapter, and for 25Gbps testing, we used the QLogic QL45212 Dual-port 25GbE Adapter. We connected one high-speed port (10Gbps or 25Gbps) on each server to a 10Gbps/25Gbps Ethernet switch. The test bed environment also included an additional server and a workstation for management and test control. The environment was managed through a separate 1Gbps Ethernet management network with its own 1Gbps Ethernet switch.

Figure 3. ESG Lab Test Bed

QLogic FastLinQ 3400 Series 10GbE Intelligent Ethernet Adapters

ESG Lab began by testing the QLogic QLE3442-SR Dual-port 10GbE Adapter. Network traffic was simulated using IxChariot, an application performance and network assessment tool from Ixia. IxChariot simulates real-world applications, enabling users to measure device, system, and network performance under realistic conditions. Each VMware server was configured with four VMs, and each VM ran two threads of IxChariot. Thus, for unidirectional tests, there were a total of eight threads moving traffic; for bidirectional simultaneous transmit and receive tests, there were a total of sixteen threads moving traffic, eight for each direction. We first ran unidirectional tests, measuring the maximum amount of data transferred in one direction. Each test used a different data I/O size, starting at 1 kilobyte (KB) and progressing through successively larger block sizes.
Either the hypervisor or the tunneling offload engine was responsible for segmenting the large I/O blocks into network packets, encapsulating the packets, and transmitting them to their destination. We ran each set of tests twice: first without tunneling offload, and then a second time with tunneling offloaded from the host CPU to the QLE3442. As shown in Figure 4, at every data block size, the QLE3442 delivered more throughput when tunneling was offloaded to the Ethernet adapter. The most dramatic results were delivered with the larger I/O blocks, demonstrating the value of transmit large segment offload (LSO) and TCP segmentation offload (TSO).
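The segmentation work that LSO/TSO moves off the host can be sketched as follows. This is a minimal illustration, not QLogic's implementation; the 1,460-byte MSS is a typical value for standard Ethernet frames:

```python
def segment(block: bytes, mss: int = 1460) -> list:
    """Split one large application send into MSS-sized segments.

    This is the per-packet work that LSO/TSO moves from the hypervisor
    to the adapter; with tunneling offload the adapter also adds the
    VXLAN/NVGRE encapsulation to each resulting segment.
    """
    return [block[i:i + mss] for i in range(0, len(block), mss)]

# A single 64 KB application write becomes 45 wire-sized segments --
# segmentation, encapsulation, and checksum work for every one of them.
segs = segment(b"\x00" * 65536)
assert len(segs) == 45 and len(segs[0]) == 1460
```

The larger the I/O block, the more segments per send, which is why the throughput gains from offload grow with block size.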
Figure 4. QLogic QLE3442 10Gbps Simultaneous Transmit and Receive Throughput (Gbps, offload vs. no offload, by I/O block size)

The relative performance gains for each I/O size are shown in Figure 5. The smallest gains were realized with the smallest data blocks (7%), while the largest gains were realized with the largest data blocks (32%). This is to be expected, as more work is required to segment a large block into packets. On average, offloading tunneling resulted in a 19% improvement in the amount of data transferred.

Figure 5. QLogic QLE3442 10Gbps Simultaneous Transmit and Receive Offload Throughput Improvement (percent improvement by I/O block size, ranging from 7% to 32%)
In addition to increasing the amount of data that can be transferred, tunneling offload increases efficiency by reducing the overall load on the host processor. We computed the amount of data that can be transferred for each unit of CPU work, measured as Mbps per CPU utilization percent (Mbps/CPU %). As shown in Figure 6, the QLE3442 was able to transfer more data per unit of CPU work for almost every I/O block size.

Figure 6. QLogic QLE3442 10Gbps Simultaneous Transmit and Receive Offload Efficiency (Throughput/CPU Utilization, offload vs. no offload)

Next, we calculated the percentage improvement in efficiency achieved through tunneling offload. The largest gain was 18.6%, and on average the QLE3442 delivered a 6% improvement in efficiency, as shown in Figure 7.

Figure 7. QLogic QLE3442 10Gbps Simultaneous Transmit and Receive Offload Efficiency Improvement (percent improvement by I/O block size, ranging from -9.5% to 18.6%)
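The efficiency metric used above reduces to a short calculation. The numbers below are illustrative only, chosen to show the direction of the effect, and are not ESG Lab's measured results:

```python
def efficiency(throughput_mbps: float, cpu_util_pct: float) -> float:
    """Data moved per unit of host CPU work (Mbps per CPU %)."""
    return throughput_mbps / cpu_util_pct

def pct_improvement(offload: float, no_offload: float) -> float:
    """Relative gain of the offload case over the no-offload baseline."""
    return (offload - no_offload) / no_offload * 100

# Hypothetical numbers: offload moves slightly more data while using
# less CPU, so Mbps per CPU % rises on both counts.
base = efficiency(9000, 60)   # no offload: 150 Mbps per CPU %
off = efficiency(9500, 55)    # offload: ~172.7 Mbps per CPU %
assert pct_improvement(off, base) > 0
```

Because the metric divides throughput by CPU load, an adapter can show an efficiency gain even at block sizes where raw throughput barely moves, as long as the CPU is doing less work.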
QLogic FastLinQ Series 25GbE Intelligent Ethernet Adapters

Next, ESG Lab repeated the entire test sequence using QLogic QL45212 Dual-port 25GbE Adapters. The amount of data transferred is shown in Figure 8. As with the 10GbE adapters, the QL45212 delivered improved throughput with tunneling offload. When the host CPU encapsulated the data, the host was able to sustain about 38 Gbps. When encapsulation was offloaded to the NIC, the system was able to sustain between 43 and 46 Gbps.

Figure 8. QLogic FastLinQ QL45212 25Gbps Simultaneous Transmit and Receive Throughput (Gbps, offload vs. no offload, by I/O block size)

We computed the percentage increase in throughput, as shown in Figure 9. On average, the QL45212 tunneling offload delivered an 18% increase in throughput.
Figure 9. QLogic FastLinQ QL45212 25Gbps Simultaneous Transmit and Receive Offload Throughput Improvement (percent improvement by I/O block size, ranging from 14% to 21%)

Next, we calculated the reduction in host CPU utilization with tunneling offload, as shown in Figure 10. On average, the QL45212 delivered a 24% reduction in host CPU utilization.

Figure 10. QLogic FastLinQ QL45212 25Gbps Simultaneous Transmit and Receive Offload CPU Utilization Improvement (percent improvement by I/O block size, ranging from 14% to 28%)

Finally, we reviewed the efficiency gains obtained through tunneling offload. As expected, the maximum gains were obtained at the large block sizes, and on average, the QL45212 delivered a 55% improvement in efficiency, as measured by Gbps/CPU utilization.
Figure 11. QLogic FastLinQ QL45212 25Gbps Simultaneous Transmit and Receive Efficiency (Throughput Gbps/CPU Utilization %, offload vs. no offload)

Why This Matters

Operating more efficiently is an ongoing mission for IT in any organization. In a perfect world, IT would spend significantly less, achieve higher levels of performance, and use fewer resources, all while continuing to gain a competitive advantage. 10Gbps and 25Gbps Ethernet enables organizations to operate at maximum throughput, but using this resource has led to host processor bottlenecks. The easy response is to just buy more, but that is costly and not a sustainable business model.

ESG Lab validated that the QLogic FastLinQ 3400 Series 10GbE and FastLinQ Series 25GbE Intelligent Ethernet Adapters accelerate throughput and reduce host CPU utilization by offloading tunneling from the host CPU to the network adapter. With tunneling offload, the FastLinQ 3400 Series 10GbE Adapter delivered an average 19% improvement in throughput, and offloading the work enabled the host to operate 6% more efficiently. Likewise, the FastLinQ Series 25GbE Adapter delivered an average 18% increase in throughput. More impressively, by offloading the host, applications could transfer 55% more data per unit of host CPU work. These gains in efficiency enable QLogic to deliver on the IT mission of doing more with less.
The Bigger Truth

IT organizations are taking to heart the lessons of cloud and hyper-scale data centers in driving efficiencies. As a result, surveyed organizations ranked data center modernization third in the list of CIO whiteboard initiatives, right behind cybersecurity and data analytics.5 However, the modern data center with a fully virtualized infrastructure presents multiple networking challenges to IT organizations, including VM isolation, performance, and scale. Legacy Ethernet adapters, which cannot offload modern tunneling protocols, are limited in their ability to deliver maximum throughput or improve the efficiency of the data center.

QLogic offers 10GbE and 25GbE adapters with tunneling offload. The performance and efficiency gains of the QLogic FastLinQ 3400 Series 10GbE and FastLinQ Series 25GbE Intelligent Ethernet Adapters come from transmit large segment offload (LSO) and TCP segmentation offload (TSO), along with receive side scaling (RSS), transmit side scaling (TSS), and large receive offload (LRO; 25GbE adapters only). Using LSO, TSO, RSS, TSS, and LRO, the adapters increase throughput and reduce host CPU load, improving the efficiency of the virtualized server while maximizing the scalability of the data center.

ESG Lab validated the performance and efficiency gains of the QLogic Ethernet adapters by comparing tunneling offload performance with tunneling encapsulation performed by the host CPU. The FastLinQ 3400 Series, at 10GbE, delivered an average 19% increase in throughput and a 6% increase in efficiency, as measured by the amount of data transferred per unit of work by the host CPU. The FastLinQ Series 25GbE Adapters delivered an 18% increase in throughput. More impressively, the 25GbE adapter enabled the host to operate 55% more efficiently on average.
The business world runs at the fastest speed possible, and your organization should, too. By deploying QLogic 10Gbps and 25Gbps Ethernet with tunneling offload, organizations can operate at the speed of the business without worrying about the performance impact on the underlying virtualization infrastructure. If you are looking to modernize your infrastructure with forward-thinking technology that enables higher levels of efficiency with more throughput, ESG Lab suggests checking out QLogic 10GbE and 25GbE networking solutions with tunneling offload.

All trademark names are property of their respective companies. Information contained in this publication has been obtained by sources The Enterprise Strategy Group (ESG) considers to be reliable but is not warranted by ESG. This publication may contain opinions of ESG, which are subject to change. This publication is copyrighted by The Enterprise Strategy Group, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express consent of The Enterprise Strategy Group, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact ESG Client Relations.

The goal of ESG Lab reports is to educate IT professionals about data center technology products for companies of all types and sizes. ESG Lab reports are not meant to replace the evaluation process that should be conducted before making purchasing decisions, but rather to provide insight into these emerging technologies. Our objective is to go over some of the more valuable features/functions of products, show how they can be used to solve real customer problems, and identify any areas needing improvement.
ESG Lab's expert third-party perspective is based on our own hands-on testing as well as on interviews with customers who use these products in production environments.

5 Source: ESG Research Report, 2016 IT Spending Intentions Survey, February 2016.
Data Center Virtualization and Cloud QA Expertise Highlights Broad Functional QA Experience Deep understanding of Switching and Routing Protocols Strong hands on experience in multiple hyper-visors like
Network Virtualization What is Network Virtualization? Abstraction of the physical network Support for multiple logical networks running on a common shared physical substrate A container of network services
Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest
Presenter: Vinit Jain, STSM, System Networking Development, IBM System & Technology Group A Case for Overlays in DCN Virtualization Katherine Barabash, Rami Cohen, David Hadas, Vinit Jain, Renato Recio
Mobile Cloud Computing Lecture 02b Cloud Computing II 吳 秀 陽 Shiow-yang Wu T. Sridhar. Cloud Computing A Primer, Part 2: Infrastructure and Implementation Topics. The Internet Protocol Journal, Volume 12,
CloudEngine 1800V Virtual Switch CloudEngine 1800V Virtual Switch Product Overview Huawei CloudEngine 1800V (CE1800V) is a distributed Virtual Switch (vswitch) designed by Huawei for data center virtualization
FIBRE CHANNEL OVER ETHERNET A Review of FCoE Today ABSTRACT Fibre Channel over Ethernet (FcoE) is a storage networking option, based on industry standards. This white paper provides an overview of FCoE,
Field Audit Report Asigra Hybrid Cloud Backup and Recovery Solutions By Brian Garrett with Tony Palmer May, 2009 Field Audit: Asigra Hybrid Cloud Backup and Recovery Solutions 2 Contents Introduction...
Lab Validation Report Unified Windows Storage Consolidation NetApp Windows Consolidation in Virtual Server Environments By Brian Garrett August 2010 Lab Validation: Unified Windows Storage Consolidation
Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate
Network Technologies for Next-generation Data Centers SDN-VE: Software Defined Networking for Virtual Environment Rami Cohen, IBM Haifa Research Lab September 2013 Data Center Network Defining and deploying
10Gb Ethernet Consolidate and Converge Intelligent Ethernet Adapters and Converged Network Adapters LAN Connectivity Optimized for Consolidation Replaces Multiple 1GbE ports with 10GbE port QLogic NPAR
Overview Are you looking to reduce network congestion and improve cable management at the servers in your HP ProLiant Gen8 and Gen9 environment? The is a dual-port 10GBASE-T adapter, featuring the 57810S