Virtual Switching in an Era of Advanced Edges
Justin Pettit, Jesse Gross, Ben Pfaff, Martin Casado (Nicira Networks, Palo Alto, CA)
Simon Crosby (Citrix Systems, Ft. Lauderdale, FL)

Abstract

CPU virtualization has evolved rapidly over the last decade, but virtual networks have lagged behind. The greater context available at the edge and the ability to apply policies early make edge switching attractive. However, the perception of missing features and high CPU utilization can cause it to be prematurely dismissed. Advanced edge switches, such as Open vSwitch, answer many of the shortcomings of simple hypervisor bridges. We revisit the role of edge switching in light of these new options, which have capabilities formerly available only in high-end hardware switches. We find that edge switching provides an attractive solution in many environments. As features are added and the ability to offload more work to hardware improves, these advanced edge switches will become even more compelling.

I. INTRODUCTION

Compute virtualization is having a profound effect on networking. It challenges the traditional architecture by creating a new access layer within the end-host, and by enabling end-host migration, joint tenancy of shared physical infrastructure, and non-disruptive failover between failure domains (such as different subnets). It also promises to enable new paradigms within networking by providing richer edge semantics, such as host movement events and address assignments.

Traditional networking presents many adoption hurdles to compute virtualization. Classic IP does not support mobility in a scalable manner. Flat networks such as Ethernet do not scale without segmenting into multiple logical subnets, generally through VLANs. Network configuration state such as isolation, QoS, and security policies does not dynamically adapt to network topology changes. Thus, not only does virtual machine (VM) migration break network configuration, but physical network policies still demand per-box configuration within cloud environments, a far cry from the resource-pool model that is so common in scale-out cloud deployments. New protocols such as TRILL [1] address some of these issues, but still fail to meet all of the challenges posed by virtualization, such as managing end-to-end configuration state during mobility.

Over the last few years, both academia and industry have proposed approaches to better adapt networks to virtual environments. In general, these proposals add context to packets so that the network can begin enforcing forwarding and policy over the logical realm. Network architecture must change in two ways to accomplish this. First, network elements must be able to identify the logical context. This can be accomplished by adding or overloading a tag in a packet header to identify a virtual machine or a virtual network. Second, the network must adapt to changes in the virtualization layer, either by reacting to changes inferred from network activity or by responding to signals from the virtualization layer. The existing approaches to making these changes can be characterized by the network entity that performs the first lookup over the logical context. In one approach, the end-host implements the virtual network control logic and the hypervisor forwards packets [2], [3], [4]. In the other approach, packets pass immediately to the first-hop switch, bypassing the end-host's networking logic [5], [6]. Both approaches can use advanced NIC (Network Interface Card) features for acceleration.
The rationale for performing packet lookup and policy enforcement at the first-hop switch is simple: switches already contain specialized hardware to enforce complex network functions at wire speed. Thus, much of the recent public discourse and standardization effort has focused on this approach. However, increases in core count and processor speed, the availability of NICs with on-board network logic, and the flexibility of software make switching at the end-host an attractive alternative. Yet little has been written about the relative merits of this approach.

In this paper, we draw on our experiences implementing and deploying Open vSwitch [4] to present the case for edge-based switching. It is not our intention to argue that this approach is preferable to others; there are good arguments on either side. Rather, we focus on the implications of virtual switch design, and we discuss how developments in NIC technology will continue to address performance and cost concerns. We believe that the combination of software switching with hardware offload (either in the NIC or the first-hop switch) strikes a practical balance between flexibility and performance.
We construct the discussion as follows. The next section provides background on the emergence of the virtual networking layer. Section III discusses switch design within the hypervisor, using Open vSwitch as a concrete example. We then discuss cost and performance issues in Section IV. Section V looks at alternatives to edge switching, followed by speculation about future trends in Section VI. We conclude in Section VII.

II. EVOLUTION OF THE EDGE

On any particular virtualized server, the virtual network resources of VMs must share a limited number of physical network interfaces. In commodity hypervisors, an individual VM has one or more virtual network interface cards (VIFs). The network traffic on any given VIF is bridged by the hypervisor, using software or hardware or both, to a physical network interface (PIF). Figure 1 shows how physical and virtual network resources are related.

Fig. 1. Structure of a virtual network.

The x86 hypervisors introduced in the late 1990s implemented simple network bridges in software [7]. These bridges had no network manageability features and no ability to apply policies to traffic. For this purpose, the most troublesome traffic was that between VMs on the same physical host. This traffic passed directly from VM to VM, without ever traveling over a physical wire, so network administrators had no opportunity to monitor or control it. This drawback was only a minor issue at the time, because the low degree of server consolidation meant that the complexity of virtual networks was limited. Since then, however, the number of VMs per server has increased greatly, and 40 or more VMs on a host is now not uncommon. At this scale, network and server administrators must have tools to manage and view virtual networks.

This paper discusses the advanced edge switch approach to solving this virtual network management problem. This approach takes advantage of the unique insight available to a hypervisor bridge: as a hypervisor component, it can directly associate network packets with VMs and their configuration. These switches add features for visibility and control formerly found only in high-end enterprise switches. They increase visibility into inter-VM traffic through standard methods such as NetFlow and mirroring. Advanced edge switches also implement traffic policies to enforce security and quality-of-service requirements. To make management of numerous switches practical, advanced edge switches support centralized policy configuration. This allows administrators to manage a collection of bridges on separate hypervisors as if it were a single distributed virtual switch. Policies and configuration are centrally applied to virtual interfaces and migrate with their VMs.

Advanced edge switches can use NIC features that improve performance and latency, such as the TCP segmentation and checksum offloading now available even in commodity cards. More advanced technologies, such as support for offloading GRE and IPsec tunnels, are becoming available, while SR-IOV NICs allow VMs to send and receive packets directly.

Advanced edge switches are making inroads in the latest generation of hypervisors, such as the VMware vSwitch, the Cisco Nexus 1000V, and Open vSwitch. The following section highlights Open vSwitch, an open source virtual switch containing many of the advanced features that we feel define this new generation of edge switches.
III. OPEN VSWITCH

Overview

Open vSwitch is a multilayer virtual switch designed with flexibility and portability in mind. It supports the features required of an advanced edge switch: visibility into the flow of traffic through NetFlow, sFlow, and mirroring (SPAN and RSPAN); fine-grained ACLs (Access Control Lists) and QoS (Quality of Service) policies over a 12-tuple of L2 through L4 headers; and support for centralized control. It also provides port bonding, GRE and IPsec tunneling, and per-VM traffic policing.

The Open vSwitch architecture has been previously described [8]. Since then, a configuration database has been introduced, which allows multiple simultaneous front-ends to be used. The primary interfaces use JSON-RPC to communicate over local sockets on the system or with remote systems through an SSL tunnel.¹ We anticipate adding other front-ends, including an IOS-like CLI, SNMP, and NETCONF.

¹This remote interface will likely be standardized as the OpenFlow switch management protocol in the near future. Combined with the already defined datapath control protocol, the pair would comprise the OpenFlow protocol suite.
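As a concrete illustration of this configuration database interface, the minimal sketch below opens the local OVSDB socket and issues a single JSON-RPC request. The socket path and the list_dbs method follow current Open vSwitch and OVSDB documentation, which postdates the version described here, so treat the details as assumptions rather than a description of the original release.

```python
# Minimal sketch: query the Open vSwitch configuration database over
# its local JSON-RPC socket. Assumes a host with Open vSwitch running
# and the default OVSDB socket path; both are assumptions.
import json
import socket

OVSDB_SOCK = "/var/run/openvswitch/db.sock"  # assumed default location

def ovsdb_call(method, params):
    """Send one JSON-RPC request and return the decoded result."""
    request = {"method": method, "params": params, "id": 0}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(OVSDB_SOCK)
        sock.sendall(json.dumps(request).encode())
        # A single recv() suffices for the small reply expected here.
        reply = json.loads(sock.recv(65536).decode())
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

if __name__ == "__main__":
    # "list_dbs" enumerates the databases the server hosts. Remote
    # listeners, such as a central controller, speak the same protocol.
    print(ovsdb_call("list_dbs", []))
```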
Open vSwitch is backwards-compatible with the Linux bridge, which many Linux-based hypervisors use for their edge switch. This allows it to serve as a drop-in replacement in many virtual environments. It is currently being used in deployments based on Xen, XenServer, KVM, and VirtualBox. The Xen Cloud Platform and upcoming versions of XenServer will ship with Open vSwitch as the default switch. Open vSwitch is also being ported to non-Linux hypervisors and hardware switches, thanks to its commercial-friendly license, modular design, and growing feature set.

Integration with XenServer

Open vSwitch works seamlessly with XenServer, as shown in Figure 2. As mentioned earlier, future versions of XenServer will ship with Open vSwitch as the default. It is also fully compatible with XenServer 5.6, the currently shipping version.

Fig. 2. Open vSwitch integration with XenServer. Each unprivileged virtual machine (DomU) has one or more virtual network interfaces (VIFs). VIFs communicate with the Open vSwitch fast-path kernel module ovs_mod running in the control VM (Dom0). The kernel module depends on the userspace ovs-vswitchd slow path, which also implements the OpenFlow and configuration database protocols. XAPI, the XenServer management stack, also runs in Dom0 userspace.

XenServer is built around a programmatic interface known as XAPI [9] (the XenServer API). XAPI is responsible for managing all aspects of a XenServer, including VMs, storage, pools, and networking. XAPI provides an external interface for configuring the system, as well as a plug-in architecture that allows applications to be built around XenServer events.

XAPI notifies Open vSwitch of events that may be of interest. The most important of these are related to network configuration. Internal and external networks may be created at any time. External networks support additional configuration such as port bonding and VLANs. These networks are essentially learning domains to which virtual interfaces may be attached.

XAPI notifies Open vSwitch when bridges should be created and interfaces should be attached to them. When it receives such a notification, Open vSwitch queries XAPI to determine greater context about what is being requested. For example, when a virtual interface is attached to a bridge, it determines the VM associated with it. Open vSwitch stores this information in its configuration database, which notifies any remote listeners, such as a central controller.

Centralized Control

A common component of advanced edge switches is the ability to be centrally managed. Open vSwitch provides this through support for the OpenFlow [10] protocol suite. OpenFlow is an open standard that permits switch configuration and dataplane manipulation to be managed in a centralized manner.

At Synergy 2010 [11], Citrix demonstrated an application that treats all the Open vSwitches within a XenServer pool as a single distributed virtual switch. It can implement policies using the visibility, ACL, and traffic-policing support outlined earlier. These policies may be defined over pools, hypervisors, VMs, and virtual interfaces in a cascading manner. Appropriate policies are generated and attached to virtual interfaces. As VMs migrate around the network, the policies assigned to their virtual interfaces follow them. Thanks to its tight integration with XenServer, Open vSwitch is able to work in tandem with the control application to enforce these policies.
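To make the idea of policies attached to virtual interfaces concrete, the hedged sketch below installs a simple anti-spoofing ACL for one VIF using the ovs-ofctl flow syntax. The bridge name, OpenFlow port number, and MAC address are hypothetical placeholders; in a real deployment these rules would arrive from the central controller over OpenFlow rather than from a local script.

```python
# Sketch: attach a per-VIF anti-spoofing policy as OpenFlow rules.
# Bridge, port, and MAC values are hypothetical; the flow syntax
# follows the ovs-ofctl documentation.
import subprocess

BRIDGE = "xenbr0"              # hypothetical bridge name
VIF_OFPORT = 5                 # hypothetical OpenFlow port of the VIF
VM_MAC = "aa:bb:cc:dd:ee:01"   # MAC assigned to the VM's VIF

def add_flow(flow):
    """Install one flow entry on the bridge."""
    subprocess.run(["ovs-ofctl", "add-flow", BRIDGE, flow], check=True)

# Allow traffic from the VIF only when it uses the VM's assigned MAC,
# falling back to normal L2 switching; drop everything else from that
# port. The higher-priority rule wins.
add_flow(f"priority=100,in_port={VIF_OFPORT},dl_src={VM_MAC},actions=normal")
add_flow(f"priority=90,in_port={VIF_OFPORT},actions=drop")
```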
IV. IMPLICATIONS OF END-HOST SWITCHING

In this section, we look at the implications of end-host switching. We focus on the issues of cost, performance, visibility, and control associated with such an approach.

Cost

Advanced edge switching is primarily a software feature. Deploying it requires installing software in the hypervisor layer (assuming it is not installed by default), and upgrades require only a software update. Some deployments of advanced edge switches may benefit from purchasing and deploying advanced NICs to increase performance. We expect these NIC features to migrate naturally into lower-end hardware over time, as has happened with hardware checksumming and other features once considered advanced. As edge switches continue to add new features, some of them may not initially be supported in hardware, but the switch can fall back to software-only forwarding until a newer NIC is available. Otherwise, deployment of advanced edge switching has minimal hardware cost.
Implementation of advanced edge switching using a flow-based protocol, such as OpenFlow, requires adding an extra server to the network to act as the controller. Depending on deployment size, a VM may be suitable for use as a controller.

Performance

Edge switches have been demonstrated to be capable of 40 Gbps or higher switching performance [12]. However, raw forwarding rates for edge switches can be deceptive, since CPU cycles spent switching packets are CPU cycles that could be used elsewhere.

An edge switch's greatest strength is VM-to-VM traffic, which never needs to hit the wire. The communications path is defined by the memory interface, with its high bandwidth, low latency, and negligible error rate, rather than by network capabilities. Throughput is higher; errors due to corruption, congestion, and link failure are avoided; and computing checksums is unnecessary.

The CPU cost of switching at the edge can be minimized by offloading some or all of the work to the NIC. Techniques that are already common, such as TCP segmentation offloading (TSO), provide some assistance, while newer methods such as SR-IOV allow virtual machines to access the hardware directly, completely avoiding any CPU load from switching. Future increases in performance and newer types of offloading will further diminish the impact of network processing on the end host. Section VI includes a discussion of future directions for NIC offloading.

The edge's visibility into the source of virtual network traffic gives it the ability to affect traffic before it enters the network, protecting oversubscribed uplinks. It is particularly important to be able to throttle traffic from untrusted VMs, such as those in a commercial cloud environment.

Visibility

Traditional hypervisor switches provide network administrators very little insight into the traffic that flows through them. As a result, it is logical to use hardware switches for monitoring, but VM-to-VM traffic that does not pass over a physical wire cannot be tracked this way. Advanced edge switches provide monitoring tools without the need for high-end physical switches. For example, Open vSwitch supports standard interfaces for monitoring tools, including NetFlow, sFlow, and port mirroring (SPAN and RSPAN). Software switches can inspect a diverse set of packet headers in order to generate detailed statistics for nearly any protocol. Adding support for new types of traffic requires only a software update.

Independent of the capabilities of different pieces of hardware and software, some locations are better suited to collecting information than others. The closer the measurement point is to the component being measured, the richer the information available, since once data has been aggregated or discarded it cannot be recovered. An edge switch directly interacts with all virtual machines and can gather any data it needs without intermediate layers or inferences. In contrast, hardware switches must rely on a hypervisor component for certain types of data or go without that information. A common example is the switch ingress port: packets must be explicitly tagged with the port, or else it must be inferred from the MAC address, a dangerous proposition given untrusted hosts.
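As an illustration of how little is involved in turning on this kind of visibility, the sketch below enables NetFlow export on a bridge through ovs-vsctl. The bridge name and collector address are hypothetical, and the command syntax follows the current Open vSwitch manual pages, so the exact flags should be treated as assumptions for the software version described here.

```python
# Sketch: enable NetFlow export on an Open vSwitch bridge.
# Collector address and bridge name are hypothetical placeholders;
# command syntax per current ovs-vsctl documentation (an assumption
# for the era described in this paper).
import subprocess

BRIDGE = "xenbr0"              # hypothetical bridge
COLLECTOR = "10.0.0.10:5566"   # hypothetical NetFlow collector

# Create a NetFlow record and point the bridge at it in one
# transaction; "--" separates the two ovs-vsctl sub-commands.
subprocess.run(
    [
        "ovs-vsctl",
        "--", "set", "Bridge", BRIDGE, "netflow=@nf",
        "--", "--id=@nf", "create", "NetFlow",
        f'targets="{COLLECTOR}"', "active-timeout=30",
    ],
    check=True,
)
```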
Control

Managing edge switches is potentially a daunting task. Instead of simply configuring a group of core switches within the network, each edge becomes another network device that needs to be maintained. As mentioned earlier, advanced edge switches provide methods for remote configuration, which can make these disparate switches appear as a single distributed virtual switch.

Given the visibility advantages of the edge described in the previous section, these switches are in a prime position to enforce network policy. The rich sources of information available give network administrators the ability to write fine-grained rules for handling traffic. In addition, applying policy at the edge of the network makes it possible to drop unwanted traffic as early as possible, avoiding wasting the resources of additional network elements. Advanced edge switches run on general-purpose computing platforms with large amounts of memory, enabling them to support a very large number of ACL rules of any form. While complex sets of rules may have a performance impact, it is possible to tailor the expressiveness, throughput, and cost of a set of ACLs to the needs of a particular application.

V. RELATED WORK

Advanced edge switching solves the problem of separation between traffic on the virtual network and the policies of the physical network by importing the network's policies into the virtual network. It is also possible to take the opposite approach: to export the virtual traffic into the physical network. The latter approach is exemplified by VEPA [13] (Virtual Ethernet Port Aggregator), which changes the virtual bridge to forward all traffic that originates from a VM to the adjacent bridge, that is, the first-hop physical switch. This is a simple change to existing bridges in hypervisors.² In VEPA, the adjacent bridge applies the physical network's filtering and monitoring policies.

²A proposed patch to add VEPA support to the Linux kernel bridge added or changed only 154 lines of code [14].
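For readers who want to experiment with this behavior, modern Linux kernels expose a hairpin flag on bridge ports that descends from the VEPA patch mentioned above [14]. The sketch below toggles it, assuming root privileges, an iproute2 "bridge" utility recent enough to support the flag, and a hypothetical port name; none of these details come from the paper itself.

```python
# Sketch: enable hairpin (VEPA-style reflective relay) on one Linux
# bridge port. The port name is hypothetical; requires root and an
# iproute2 "bridge" utility that supports the hairpin flag.
import subprocess

PORT = "vnet0"  # hypothetical bridge port attached to a VM

# Equivalent sysfs knob: /sys/class/net/<bridge>/brif/vnet0/hairpin_mode
subprocess.run(
    ["bridge", "link", "set", "dev", PORT, "hairpin", "on"],
    check=True,
)
```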
When traffic between VMs in a single physical server is sent to the adjacent bridge, the bridge then sends that traffic back to the server across the same logical link on which it arrived, as shown in Figure 3. This hairpin switching phenomenon is characteristic of the approach, so we adopt it here as the general term that includes VEPA and similar solutions, such as VN-Link from Cisco [6].

Fig. 3. Path of a packet from VM 1 to VM 2 in hairpin switching: (1) VM 1 sends the packet over a VIF to the hypervisor, (2) the hypervisor passes the packet to the adjacent bridge, (3) the bridge passes the packet back to the hypervisor, (4) the hypervisor delivers the packet to VM 2.

Sending packets off-box for processing allows the end-host to be simpler. Processing of packet headers is avoided, and NICs can use low-cost components, since advanced functionality is handled by the adjacent bridge. If advanced NICs are used, SR-IOV makes moving packets from VMs to the adjacent bridge even more efficient by bypassing the hypervisor altogether.

Without a technology such as SR-IOV, VEPA has performance similar to an edge switch for off-box traffic. However, VM-to-VM traffic will tend to be worse, since it must traverse the physical link. An advantage of edge switching is that the hypervisor does not send traffic off-box and can avoid the overhead of handling every packet twice. In addition to throughput limitations and potential congestion, hairpin performance may be affected by the need to effectively handle twice as many packets: even though no actual switching happens in the hypervisor, these packets must still be copied to and from the network card, have their existence signaled with interrupts, and incur the other overhead associated with actually sending and receiving packets.

Figures 4 and 5 show the effective throughput that Open vSwitch achieves for various flow sizes versus VEPA on the same hardware. The measurements were executed using NetPIPE on Debian VMs within Citrix XenServer. For the off-box test in Figure 5, Ubuntu Server was running on bare metal. VEPA support was based on the previously mentioned patch applied to the XenServer bridge source. The hairpin switch was a Broadcom Triumph-2 system with the MAC put into line loopback mode. All off-box communication was done over a 1 Gbps link.

Fig. 4. Throughput (Mbps) versus transfer size (bytes) in XenServer virtual machines on the same host with Open vSwitch (solid) and VEPA (dotted).

Fig. 5. Throughput (Mbps) versus transfer size (bytes) between a XenServer virtual machine and a remote bare-metal host with Open vSwitch (solid) and VEPA (dotted). Performance was roughly equivalent across all transfer sizes.

Hairpin switching reduces the number of switching elements within the network. In the absence of centralized configuration of edge switches, this eases configuration for the network administrator. It also makes configuration consistent for networks that use a single source for their equipment.

Some hairpin solutions encapsulate frames sent to the first-hop switch with an additional tag header; for example, VEPA provides the option of using the S-Tag, and VN-Link defines the new VNTag header. The tag provides additional context about the source of the packet, in particular the virtual ingress port. Without this information, the switch must rely on fields in the packet header, such as the source MAC address, which are easily spoofed by a malicious VM. Tagging protocols introduce new issues of their own.
A tag distinguishes contexts, but it does not say anything about those contexts. Most network policies cannot be enforced accurately without more specific information, so extra data (port profiles) must be distributed and kept up to date over additional protocols. The form of this information is not yet standardized, so users may be tied to a single vendor.

Besides this inherent issue, the specific proposals for tagging protocols have weaknesses of their own.
Tags in VNTag and S-Channel are only 12 bits wide [15], [13], which is not enough to uniquely identify a VM within a scope broader than a server rack. This limits their usefulness beyond the first-hop switch. S-Channel tagging additionally does not support multicast or broadcast across tags, which can potentially increase the overhead of sending packets to multiple guests. Tags may also introduce problems with mobility, even across ports in a single switch.

Hairpinning effectively turns an edge switch into an aggregation device. The first-hop switch must then scale to support an order of magnitude or more greater number of switch interfaces with features enabled. The hardware resources of top-of-rack switches, primarily designed to handle approximately 48 interfaces, may prove insufficient for the services the user wishes to enable on a particular VM interface. Consider ACL rules, for example: switch hardware is often constrained to about 4,000 rules. This number may be adequate to define similar policies across most members of the network. However, it may become constraining if fine-grained policies are needed for each guest. For example, a switch with 48 physical ports and 20 VMs per port would allow roughly 4 rules per VM (4,000 rules across 960 VMs).

Hairpin switching is an attractive solution in environments that need to apply similar policies over all clients in the network or enforce them in aggregate. Edge switches provide more flexibility and fine-grained control at the possible expense of end-host CPU cycles. Fortunately, the approaches are not mutually exclusive, and most environments will likely find that a combination of the two provides the best solution.

VI. FUTURE

We predict that virtual networking as a field will continue to evolve rapidly as both software and hardware shift away from the model based on a more traditional physical network. We have already seen the beginning of this trend with software switches becoming more sophisticated. It will only accelerate as the line between the hardware switches used for physical networks and the software switches in virtual networks begins to blur.

As the trend continues, virtual switches will add more features that are currently available only in higher-end hardware switches. Current virtual switches support management tools such as NetFlow and ACLs. The next generation will add SNMP, NETCONF, a standard command-line interface, and other features that will make software and hardware switches indistinguishable. Given the naturally faster pace of software development compared to hardware, parity will be reached before long.

The question of edge networking versus switching at the first hop is often framed as a debate between software and hardware. In reality, the two issues are orthogonal. Hardware has long been an accepted component of software switches, via checksum offloading, TCP segmentation offloading, and other acceleration features of NICs that are now common even in low-end computers. These provide a good blend of flexibility and performance: the speed of hardware under the control of software, at the edge of the network where there is the most context.

Network cards are adding new types of offloading that are particularly suited to the workloads encountered with virtualization. Intel 10 GbE NICs [16], for example, can filter and sort packets into queues based on a variety of the L2, L3, and L4 header fields used by monitoring tools. Switch software can set up these programmable filters to accelerate forwarding taking place in the hypervisor.
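As a rough modern analogue of such programmable filters, the sketch below uses ethtool's n-tuple interface to steer one flow to a specific receive queue. The device name, flow match, and queue number are hypothetical, and the NIC must actually support n-tuple filtering (as the Intel 82599-class hardware cited above does), so treat this as illustrative rather than a reproduction of the authors' setup.

```python
# Sketch: steer an individual TCP flow to a chosen NIC receive queue
# using ethtool n-tuple filters. Device, match fields, and queue are
# hypothetical; the NIC driver must support --config-ntuple.
import subprocess

DEV = "eth0"   # hypothetical 10 GbE interface
QUEUE = 4      # hypothetical receive queue reserved for this flow

# Match TCP/IPv4 traffic destined to port 5001 and deliver it to QUEUE;
# the hypervisor switch could install such rules for its hottest flows.
subprocess.run(
    ["ethtool", "--config-ntuple", DEV,
     "flow-type", "tcp4", "dst-port", "5001",
     "action", str(QUEUE)],
    check=True,
)
```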
While it is unlikely that all traffic can be handled in hardware, due to limitations in matching flexibility and in the number of rules and queues, it is possible for the switch software to trade performance against flexibility as needed by the application. The most active traffic flows can often be handled in hardware, leaving the software switch to deal with the edge cases.

Network cards offered by Netronome [17] go a step further by offering a fully programmable network flow processing engine on the card. These allow the complete set of rules to be expressed and pushed down into the hardware. Along with I/O virtualization that allows virtual machines to directly access a slice of the physical hardware, all packet processing can be done without the help of the hypervisor. The NIC does the heavy lifting, but the hypervisor must still provide the management interfaces and the knowledge of the system that is used to configure the hardware.

As more advanced forms of offloading become available in network cards, virtual edge switches will be able to balance the capabilities of software and hardware dynamically, based on the application and the available resources. These features become available in higher-end network cards first, but they trickle down over time. A hybrid stack of hardware and software aids in this process because it is possible to balance performance with cost and to upgrade while maintaining a consistent control plane.

Just as hardware is fusing with software at the edge of the network, platforms designed for virtualization are moving into the core. Several switch vendors are currently porting Open vSwitch to their hardware platforms. This will provide a unified control interface throughout the network. Since the switches included in hypervisors have generally been custom-built for that platform, there was previously very little commonality with hardware switches.
As the relationship between software edge switches and physical first-hop switches is currently asymmetric, it is not a large leap to something like VEPA. However, with switches like Open vSwitch and the Cisco Nexus 1000V providing a common management interface, the disadvantages of having very different switching abstractions are much greater. As long as any policy must be applied at the edge (as with untrusted tenants), it is better to be able to use a consistent set of management tools if at all possible.

VII. CONCLUSION

CPU virtualization has evolved rapidly over the last decade, but virtual networks have lagged behind. The switches that have long shipped in hypervisors were often little more than slightly modified learning bridges. They lacked even the basic control and visibility features expected by network administrators. Advanced edge switches, such as Open vSwitch, answer many of their shortcomings and provide features previously available only in high-end hardware switches.

The introduction of these advanced edges changes the virtualized networking landscape. Having reached near parity with their hardware brethren, a fresh look at the relative merits of each approach is warranted. Hairpin switching can offer performance benefits, especially in situations with little VM-to-VM traffic and heavy policy requirements. Yet with their greater context, early enforcement of policies, and ability to be combined with other edges into a single distributed switch, the new generation of edge switches has many compelling features. Advances in NIC hardware will reduce the performance impact of edge switching, making it appealing in even more environments.

VIII. ACKNOWLEDGEMENTS

The authors would like to thank Andrew Lambeth, Paul Fazzone, and Hao Zheng.

REFERENCES

[1] J. Touch and R. Perlman, "Transparent Interconnection of Lots of Links (TRILL): Problem and applicability statement," RFC 5556, May 2009.
[2] VMware Inc., "Features of VMware vNetwork Virtual Distributed Switch," May.
[3] Cisco, "Nexus 1000V Series Switches," en/us/products/ps9902, Jul.
[4] Nicira Networks, "Open vSwitch: An open virtual switch," openvswitch.org/, May.
[5] P. Congdon, "Virtual Ethernet Port Aggregator standards body discussion," new-congdon-vepa-1108-v01.pdf, Nov.
[6] Cisco, "VN-Link virtual machine aware networking," at_a_glance_c pdf, April.
[7] R. Malekzadeh, "VMware for Linux networking support," support/networking.html.
[8] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in HotNets, 2009.
[9] Citrix Systems, "Overview of the XenServer API," sdk.html#sdk_overview, May.
[10] OpenFlow Consortium, "OpenFlow specification, version 1.0.0," Dec. 2009.
[11] Synergy. San Francisco, CA: Citrix Systems, May 2010.
[12] VMware Inc., "Achieve breakthrough performance with enterprise application deployment," solutions/business-critical-apps/performance.html, May.
[13] Hewlett-Packard Corp., IBM, et al., "Edge virtual bridge proposal, version 0, rev. 0.1," IEEE, Tech. Rep., April.
[14] A. Fischer, "net/bridge: add basic VEPA support," lwn.net/Articles/337547/, June.
[15] J. Pelissier, "Network interface virtualization review," Jan.
[16] Intel, "82599 10 GbE controller datasheet," download.intel.com/design/network/datashts/82599_datasheet.pdf, April.
[17] Netronome Systems, Inc., "Netronome network flow processors," May.