Implementing a Scalable Self-Virtualizing Network Interface on an Embedded Multicore Platform


Himanshu Raj, Karsten Schwan
CERCS, Georgia Institute of Technology
Atlanta, GA 30332
{rhim, schwan}@cc.gatech.edu

Abstract

Some of the costs associated with system virtualization using a virtual machine monitor (VMM) or hypervisor (HV) can be attributed to the need to virtualize peripheral devices and to then manage the virtualized resources for large numbers of guest domains. For higher end, smart IO devices, it is possible to offload selected virtualization functionality onto the devices themselves, thereby creating self-virtualizing devices. A self-virtualizing device is aware of the fact that it is being virtualized. It implements this awareness (1) by presenting to the HV an interface that enables the HV to manage virtual devices on demand, and (2) by presenting to a guest OS a virtual device abstraction that permits it to interact with the device with minimal HV involvement. This paper presents the design of a self-virtualizing network device and its implementation on a modern multi-core network platform, an IXP2400 network processor-based development board. Initial performance results demonstrate that the performance of virtual interfaces is similar to that of regular network interfaces. The longer term goal of this work is to better understand how modern multi-core platforms can be used to realize light-weight, scalable virtualization for future server platforms.

1 Introduction

Virtualization functionality is becoming an integral element of processor architectures ranging from high end server architectures to lower end PowerPC and x86-based machines [4]. However, the hypervisors (HVs) or Virtual Machine Monitors (VMMs) running on such machines not only have to virtualize processors, but they also have to carry out the following two equally important tasks:

1. virtualize all of the platform's physical resources, including its peripheral devices, and
2. manage these virtualized components for the multiple guest OSes (domains) running on the platform.

This paper focuses on the virtualization of peripheral resources (i.e., I/O device virtualization). Current practice is to manage peripherals in the hypervisor itself or in some trusted control partition. In either case, the controlling entity must export to guest domains an interface to a virtualized device that provides multiplex/demultiplex access to the actual physical device, and all accesses to the device must go through this controlling entity. Current implementations of I/O device virtualization use a distinct I/O controller domain for all devices [13], a controller domain per device [15], or a scheme in which the driver code runs as part of the hypervisor itself [7]. The latter approach has been dismissed because of the need to avoid hypervisor failures caused by potentially faulty device drivers; the former approaches involve context switching among multiple domains for each device access. Recent advances in hardware-supported virtualization [4] do not offer specific support for peripheral virtualization. Our approach attempts to reduce the costs of peripheral virtualization with self-virtualizing devices. A self-virtualizing device (1) presents to the hypervisor (HV) an interface that enables the HV to manage virtual devices on demand, and (2) presents to a guest OS a virtual device abstraction that permits it to interact with the device with minimal HV involvement.
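To make these two roles concrete, the following sketch outlines what such a two-sided interface could look like in C. The paper defines the interface only abstractly; every name and field below is illustrative rather than part of the actual implementation.

/* Hypothetical sketch of a self-virtualizing device's two-sided interface.
 * None of these names appear in the paper; they only illustrate the split
 * between the HV-facing management path and the guest-facing data path. */
#include <stdint.h>
#include <stddef.h>

/* (1) Management interface used by the hypervisor to create, reconfigure,
 *     and destroy virtual devices on demand. */
struct vdev_mgmt_ops {
    int  (*create)(uint32_t *vdev_id, size_t tx_queue_len, size_t rx_queue_len);
    int  (*reconfigure)(uint32_t vdev_id, size_t tx_queue_len, size_t rx_queue_len);
    void (*destroy)(uint32_t vdev_id);
};

/* (2) Per-guest virtual device abstraction: a send and a receive queue the
 *     guest drives directly, with no hypervisor involvement on the common
 *     data path. */
struct vdev {
    uint32_t id;
    void    *tx_queue;   /* guest enqueues outgoing frames here  */
    void    *rx_queue;   /* device enqueues incoming frames here */
};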
One example of a device with similar functionality is the SCSI controller that supports RAID [2]. This controller can provide multiple logical disk drives that each look like a single physical disk drive to the OS. This paper presents the design of a self-virtualizing network interface realized on an IXP2400 network processor-based board [1]. This processor has multiple processing cores capable of independently running different execution threads. This internal concurrency provides a suitable platform for a scalable implementation of self-virtualization functionality that offers applications both high bandwidth and low end-to-end latency.

Experimental results described in Section 5 demonstrate virtual devices capable of operating at full link speed for 100 Mbps Ethernet links, with available TCP bandwidth and latency of 94 Mbps and 0.127 ms, respectively. At gigabit link speeds, PCI performance dominates the overall performance of virtual devices, with available TCP bandwidth and latency of 156 Mbps and 0.76 ms, respectively. The performance of virtual devices also scales with an increasing number of virtual devices at a host. In the remainder of this paper, we first provide the overall design of the self-virtualizing network interface and describe its functionality. This is followed by specific details of our prototype implementation. Next, we present the experimental setup, followed by experimental results. The paper concludes with a summary of results and a description of future work.

2 A Self-virtualizing Network Interface

This section describes the design of a self-virtualizing network interface using the IXP2400 network processor (NP)-based ENP-2611 board. This board resides in a host as a PCI add-on card. The board exports the NP's SDRAM and configuration registers to the host via a non-transparent PCI-to-PCI bridge [3]. The bridge also provides a PCI window for host memory access from the NP, which can be dynamically configured to map to a certain area of host memory. The board itself hosts an XScale (ARM) processing core used for control functions and 8 specialized processing cores, termed micro-engines, for carrying out network I/O functions. Each micro-engine supports 8 hardware contexts with minimal context switching overhead. As will become evident in the next two sections, this environment provides the flexibility (i.e., programmability) and concurrency needed to efficiently implement device virtualization functionality. In particular, the functionality implemented on this platform realizes virtual interfaces (VIFs) for the network device, where each VIF consists of two message queues, one for outgoing messages (send queue) and one for incoming messages (receive queue). These queues have configurable sizes that determine transmit and receive buffer lengths. Every VIF is assigned a unique id, and it has an associated pair of signals: one is sent by the NP to the host device driver to indicate that a packet transmission initiated by the driver is complete; the other is sent by the NP to the host driver to indicate that a packet has been received for the associated VIF and enqueued for processing by the driver. Network packets are sent and received using PCI read/write transactions on the associated message queues. We next describe the self-virtualizing device's two main functionalities: (1) managing VIFs and (2) performing network I/O.

2.1 Virtual Interface Management

This functionality includes the creation of VIFs, their removal, and changing the attributes and resources associated with VIFs. Our current implementation utilizes both the XScale core and the micro-engines: control functionality is placed on the XScale core, while fast-path I/O functions are executed by the micro-engines. Specifically, a management request is initiated from the host to the XScale core. The XScale core appropriates resources according to the request and communicates this resource modification to the host and the micro-engines. The micro-engines apply these modifications to the network I/O subsystem.
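The per-VIF state and the management request exchanged between the host and the XScale core can be pictured as follows. This is a hypothetical C rendering of the description above; the field and constant names are invented for illustration and are not the actual driver structures.

/* Hypothetical C rendering of the per-VIF state and a management request,
 * following the description above: two queues of configurable size, a unique
 * id, and a transmit-complete / receive signal pair. All names are invented
 * for illustration. */
#include <stdint.h>

enum vif_signal { VIF_SIG_TX_DONE = 0, VIF_SIG_RX_READY = 1 };

struct vif {
    uint16_t id;            /* unique VIF identifier                       */
    uint32_t send_q_len;    /* transmit buffer length (configurable)       */
    uint32_t recv_q_len;    /* receive buffer length (configurable)        */
    uint64_t send_q_base;   /* queue locations depend on the configuration */
    uint64_t recv_q_base;   /*   described in Section 3 (NP SDRAM or host) */
};

/* A management request travels from the host driver to the XScale core, which
 * appropriates resources and informs the micro-engines of the change. */
enum vif_mgmt_op { VIF_CREATE, VIF_DESTROY, VIF_SET_ATTR };

struct vif_mgmt_req {
    enum vif_mgmt_op op;
    uint16_t vif_id;        /* ignored for VIF_CREATE                      */
    uint32_t send_q_len;
    uint32_t recv_q_len;
};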
Subsequent I/O actions are executed by the micro-engines, as explained in more detail next.

2.2 Network I/O

Network I/O is completely managed by the micro-engines. It can be further subdivided into two parts: egress and ingress. Egress deals with multiplexing multiple VIFs' packets from their send queues onto the physical network ports. Ingress deals with demultiplexing the packets received from the physical network ports onto the appropriate VIFs' receive queues. Since VIFs export a regular Ethernet device abstraction to the host, this implementation models a software layer-2 switch. In our current implementation, the XScale core appropriates one micro-engine context per VIF for egress. This context is selected from a pool of contexts belonging to a single micro-engine in a round-robin fashion for simple load balancing. Ingress is managed for all VIFs by a pool of contexts on one micro-engine. Two other micro-engines are used for physical network input and output, respectively. Hence, we are still operating at 50% of the micro-engine capacity; we plan to use the rest of the micro-engines for a more scalable implementation in the future.
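As an illustration of the ingress side of this layer-2 switch, the following simplified C sketch shows how a received frame could be steered to a VIF receive queue by its destination MAC address. The real fast path runs on the micro-engines and differs in detail; the table layout, function names, and queue callback here are assumptions made for the example.

/* Simplified host-C sketch of ingress demultiplexing: the device acts as a
 * software layer-2 switch, steering each received frame to the receive queue
 * of the VIF whose MAC address matches the frame's destination. The actual
 * code runs on the micro-engines; table layout and names are illustrative. */
#include <stdint.h>
#include <string.h>

#define MAX_VIFS 16

struct vif_entry {
    uint8_t mac[6];
    int     in_use;
};

static struct vif_entry vif_table[MAX_VIFS];

/* Return the VIF id owning this destination MAC, or -1 to drop the frame. */
static int channel_demux(const uint8_t *frame)
{
    const uint8_t *dst_mac = frame;              /* Ethernet: dst MAC first */
    for (int i = 0; i < MAX_VIFS; i++) {
        if (vif_table[i].in_use &&
            memcmp(vif_table[i].mac, dst_mac, 6) == 0)
            return i;
    }
    return -1;
}

/* Ingress path: look up the destination VIF, then hand the frame to its
 * receive queue; enqueue_rx() stands in for the PCI write transactions into
 * host memory (or NP SDRAM) and the subsequent receive signal. */
void ingress(const uint8_t *frame, uint32_t len,
             void (*enqueue_rx)(int vif, const uint8_t *f, uint32_t l))
{
    int vif = channel_demux(frame);
    if (vif >= 0)
        enqueue_rx(vif, frame, len);
}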

3 Selected Implementation Details

The device supports two configurations. In one configuration, both of the message queues associated with a VIF are implemented in the NP's SDRAM; this is hereafter referred to as the host-only configuration. This configuration is useful when the firmware present on the device does not provide the capability for upstream PCI transactions, as was the case with our previous firmware version. In this case, none of the host's resources are visible to the NP, and only the host can read or write data in the NP's SDRAM. This also precludes the use of the DMA engines present on the board for data transfers to and from host memory. Both egress and ingress are performed by the host via programmed I/O using PCI read/write transactions, which seriously impacts device performance, as discussed in Section 5. In the other configuration, the send message queue is implemented in the NP's SDRAM, while the receive message queue is implemented in host memory; this is hereafter referred to as the host-NP configuration. On the egress path, the host side driver places frames into the send message queue using PCI write transactions, and the micro-engines read them locally from the NP's SDRAM. On the ingress path, the micro-engines place frames into the receive message queue using PCI write transactions, and the host side driver reads them locally from host memory.

Signals associated with a VIF are implemented as PCI interrupts. The device has a master PCI interrupt and a 16-bit interrupt identifier. These bits are shared by multiple VIFs if the total number of VIFs exceeds 8 (since each VIF requires two signals). Thus, an interrupt can result in redundant signaling of VIFs that share the same identifier bit. In the future, we plan to replace this interrupt id sharing with a more asynchronous messaging system, where messages will contain the exact id of the VIF to be signaled and will be managed in a message queue shared between the host and the NP.

4 Experimentation Setup

In this paper, our focus is to evaluate the costs of device self-virtualization. Accordingly, the experiments compare non-virtualized to virtualized devices, ignoring the host-level use of virtualization technologies. Specifically, we run a standard Linux kernel on the host and evaluate virtual interfaces as if they were regular Ethernet devices on the host. In this configuration, the host appears as a hotplug-capable system behind a switch that can provide virtual network interfaces to the host on demand. This switch is implemented by the NP-based board and connects all VIFs on the host to the rest of the network. The physical link connects one gigabit port on the NP-based board to the rest of the LAN segment. A host using a virtual interface provided by the NP-based board for network communication is hereafter termed a VNIC host. Other hosts using a regular Broadcom gigabit Ethernet controller for network communication are hereafter termed REG hosts. Figure 1 shows the abstract network topology.

[Figure 1: Network topology. The VNIC_host holds the enp2611 board on PCI; the board's 1 Gbps link feeds a GigE switch. REG_hosts connect via 100 Mbps links to a 100 Mbps switch. The two switches are joined by gigabit uplinks.]

Note that REG hosts are connected via a 100 Mbps switch, which effectively limits the bandwidth of these interfaces to 100 Mbps. All hosts are dual 2-way SMT Pentium Xeon 2.8 GHz servers with 2 GB RAM running a Linux kernel with the RHEL 4 distribution. The embedded boards are IXP2400-based with 256 MB SDRAM, running a Linux kernel from the MontaVista Preview Kit 3.0 distribution.

5 Experimental Results

5.1 Self-virtualizing network device without host virtualization

In this section, we primarily report the performance of an initial self-virtualized NIC implementation for the host-NP configuration.
We discuss some performance implications of using the host-only configuration in Section 5.1.2. The basic thesis behind these experiments is that self-virtualization can be obtained at low cost and that, with a proper IO configuration, the performance of virtual interfaces is at least similar to that of regular network interfaces.

5.1.1 Latency

We use a simple libpcap [6] client/server application to measure the round trip time between two hosts, at least one of which is a VNIC host. The client sends 64-byte messages to the server using packet sockets in SOCK_RAW mode. Each packet is directly handed to the device driver without any network layer processing. The server, which receives the packet directly from the device driver, echoes the message back to the client. The round trip time for this message send/receive is reported as the RTT. This RTT serves as an indicator of the inherent latency of the self-virtualizing network interface implementation. For a detailed discussion of the breakdown of the overall self-virtualization costs, refer to Section 5.2.
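A stripped-down version of such an RTT client is sketched below in C. The actual tool uses libpcap for capture; this sketch uses a Linux packet socket for both send and receive, and the EtherType, MAC addresses, and interface index are placeholders supplied by the caller.

/* Stripped-down RTT client: send one 64-byte raw Ethernet frame through a
 * Linux packet socket and time the echoed reply. The actual measurement tool
 * uses libpcap; interface index, MACs, and the EtherType are placeholders. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <arpa/inet.h>
#include <linux/if_packet.h>

#define FRAME_LEN      64
#define TEST_ETHERTYPE 0x88B5           /* EtherType reserved for experiments */

/* Returns the round trip time in milliseconds, or -1.0 on error. */
double measure_rtt(int ifindex, const uint8_t dst_mac[6], const uint8_t src_mac[6])
{
    int sock = socket(AF_PACKET, SOCK_RAW, htons(TEST_ETHERTYPE));
    if (sock < 0)
        return -1.0;

    struct sockaddr_ll addr = {0};
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = ifindex;
    addr.sll_halen   = 6;
    memcpy(addr.sll_addr, dst_mac, 6);

    uint8_t frame[FRAME_LEN] = {0};
    memcpy(frame, dst_mac, 6);                       /* destination MAC */
    memcpy(frame + 6, src_mac, 6);                   /* source MAC      */
    frame[12] = TEST_ETHERTYPE >> 8;                 /* EtherType       */
    frame[13] = TEST_ETHERTYPE & 0xff;

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    sendto(sock, frame, sizeof(frame), 0, (struct sockaddr *)&addr, sizeof(addr));
    do {                                             /* wait for the echo,      */
        recv(sock, frame, sizeof(frame), 0);         /* skipping the local copy */
    } while (memcmp(frame + 6, src_mac, 6) == 0);    /* of our outgoing frame   */
    gettimeofday(&t1, NULL);

    close(sock);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_usec - t0.tv_usec) / 1e3;
}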

[Figure 2: RTT for the host-NP configuration, comparing REG_IF to REG_IF, VIF to VIF, and VIF to REG_IF for #vifs = 1, 2, 4, and 8.]

For measurements across two VNIC hosts, we use an n:1x1 access pattern, where n is the number of VIFs on both hosts. In this pattern, n libpcap client/server pairs are run, each utilizing a different VIF on the request sender and a different VIF on the responder. For measurements across a VNIC host and a REG host, we can use either an n:nx1 or an n:1xn access pattern, where n is the number of VIFs on the VNIC host. In the n:nx1 access pattern, each of the n libpcap clients utilizes a different VIF on the request sender, and the libpcap server is run on the same interface on the peer node. In the n:1xn pattern, each of the n libpcap clients uses the same interface on the sender but a different VIF on the peer node. All libpcap clients are started (almost) simultaneously and are configured to send a packet every 0.2 seconds. We report the average of the average RTTs for the n libpcap client/server sessions, where n is the number of VIFs on the VNIC host.

Figure 2 shows RTT results for the host-NP configuration. The RTT between two VIFs on separate VNIC hosts is smaller than the RTT between two REG IFs on separate REG hosts. This is likely due to the faster vs. slower switch (refer to Figure 1). The RTT between a VIF on a VNIC host and a REG IF on a REG host is slightly higher than the RTT between two REG hosts. We believe the difference can be attributed to the additional switch crossing. With an increasing number of VIFs, the end-to-end latency initially scales well. However, with larger numbers of VIFs, the performance degrades rapidly. We believe this degradation results from the host's inability to handle a large number of interrupts, since the cost of self-virtualization itself does not increase much with a large number of VIFs (refer to Section 5.2 for details). We hope to address this issue by batching interrupts for multiple packets and by employing host-side polling at regular intervals for packet reception. In any case, these experiments demonstrate that end-to-end latency using self-virtualizing interfaces is better than or similar to that of regular hardware interfaces. This configuration also demonstrates good initial scalability in terms of latency with an increasing number of VIFs on a VNIC host.

5.1.2 Bandwidth

We use iperf [5] to measure the achievable bandwidth for virtual interfaces. In these experiments, the client sends data to the server for 5 seconds over a TCP connection with default parameters. Three separate cases are considered, each with both the host-only and host-NP configurations:

1. Both the iperf client and the server are bound to VIFs on different VNIC hosts.
2. The iperf client is bound to a VIF on a VNIC host, and the server is run on a REG host.
3. The iperf server is bound to a VIF on a VNIC host, and the client is run on a REG host.

For the host-only configuration, the measured average bandwidth is 25.6 Mbits/sec for Cases 1 and 3, while it is 94 Mbits/sec for Case 2. The bandwidth achieved for Case 2 is similar to the average bandwidth achieved between two REG hosts, whereas in the other cases there is a large performance difference. This difference is entirely due to the relatively high cost of PCI reads vs. PCI writes from the host, as is evident from the performance measurements for the host-NP configuration.
For the host-NP configuration, the measured average bandwidth for Case 1 is 156 Mbits/sec, while for both Cases 2 and 3 it is 94 Mbits/sec, which is similar to the performance between two REG hosts. This demonstrates that the cost of self-virtualization itself is low; it is the IO performance that dominates the overall cost, and hence the network IO path must be configured carefully. The average bandwidth degrades linearly as the number of VIFs is increased and simultaneous iperf measurements are performed. For example, with 2 VIFs, the average bandwidth of each stream is 47 Mbits/sec.

5.2 Self-virtualization micro-benchmarks

In order to better assess the costs associated with self-virtualization, we microbenchmark specific parts of the micro-engine code and host code in the host-NP configuration.

[Figure 3: Latency micro-benchmarks for #vifs = 1 and #vifs = 8, in cycles and microseconds. (a) Egress path: msg_recv, pkt_tx, total. (b) Ingress path: pkt_rx, channel_demux, msg_send, total.]

On the micro-engines, we use cycle counting for performance monitoring. Figures 3(a) and 3(b) show the results for the egress path and the ingress path, respectively. The following sub-sections of the egress path are considered:

msg_recv - The time it takes for the context specific to a VIF to acquire information about a new packet queued up by the host side driver for transmission. This involves polling the send queue in SDRAM. Additional delay can be perceived at the end application due to the scheduling of contexts by the micro-engine; although the scheduling is non-preemptive, contexts frequently yield the micro-engine for IO.

pkt_tx - Enqueueing the packet on the transmit queue of the physical port.

The time taken by the network IO micro-engine(s) to transmit the packet on the physical link is not shown here, as we consider it part of the network latency. For the ingress path, we consider the following sub-sections:

pkt_rx - Dequeuing the packet from the receive queue of the physical port.

channel_demux - Demultiplexing the packet based on its destination MAC address. This is the most crucial step in the self-virtualization implementation.

msg_send - Copying the packet into host memory via PCI write transactions and interrupting the host via a PCI write transaction.

The time taken by the network IO micro-engine(s) to receive the packet from the physical link is not shown, as we consider it part of the network latency.

With an increasing number of VIFs, the cost of the egress path increases due to increased SDRAM polling contention among micro-engine contexts for message receive from the host. Similarly, the cost of the ingress path increases due to increased demultiplexing cost. The overall effect of this cost increase on the end-to-end latency is small. For host side performance monitoring, we count cycles for message send and receive via the TSC register. For #vifs = 1, the host incurs a cost of 9.42 µs for a message receive (which involves a memory copy) and an average cost of 14.47 µs, with 1.3 µs variability (in terms of standard deviation), for a message send (which involves a PCI write). For #vifs = 8, the average cost of a message send increases to 17.35 µs, while its variability increases to 7.85 µs. Possible reasons for the increased average cost and variability of message send are PCI bus contention from the two processors and the larger number of interrupts that arise with a larger number of VIFs. The cost of message receive remains similar to the previous case. Host message receive and send, together with the egress and ingress paths on the NP, account for 59 µs of the overall end-to-end latency (#vifs = 1). The rest is spent in the host (context switches, data copies to user buffers, and interrupt handling) and in the network (time on the wire and GigE switch latency).
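The host-side numbers above were obtained by counting TSC cycles around the message send and receive routines. A minimal sketch of such instrumentation is shown below; the msg_send callback and the fixed 2.8 GHz conversion factor are assumptions made for illustration.

/* Sketch of the host-side cycle counting: read the x86 TSC around the message
 * send path and convert the delta to microseconds. The msg_send callback and
 * the fixed 2.8 GHz clock are assumptions made for illustration. */
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

#define CPU_HZ 2.8e9                /* 2.8 GHz Xeon hosts (Section 4) */

/* Time one message send, i.e., the PCI writes into the VIF's send queue. */
double time_msg_send(void (*msg_send)(void))
{
    uint64_t start = rdtsc();
    msg_send();
    uint64_t end = rdtsc();
    return (double)(end - start) / CPU_HZ * 1e6;    /* microseconds */
}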

6 Related Work

Given current trends in processor performance growth, it is usually I/O that is the bottleneck for overall system performance. Specific hardware support, such as virtual channel processors [11], is envisioned in future architectures [9] for improving I/O performance. The self-virtualizing network interface developed in our work is specifically designed to improve the performance of future virtualized systems by improving their I/O performance. In order to improve network performance for end user applications, multiple configurable network interfaces have been designed, including programmable Ethernet NICs [16], Arsenic [12], and an Ethernet network interface based on the IXP1200 network processor [10]. In this paper, we design a network interface with self-virtualization support based on an IXP2400 network processor-based embedded board. Another example of such a device is the SCSI controller with ServeRAID technology [2], which can provide virtual block devices on demand. Deploying a separate core for running application-specific code has also been explored in Virtual Communication Machines [14] and stream handlers [8].

7 Conclusions and Future Work

In this paper, we present the design and an initial implementation of a self-virtualizing network interface device using an IXP2400 network processor-based board. Initial performance analysis establishes the viability of the design. The performance of the virtual interfaces provided by the self-virtualizing network interface, in terms of end-to-end available bandwidth and latency, is similar to that of regular network interfaces, and it scales with an increasing number of virtual interfaces. The self-virtualizing device also allows a host to (re)configure virtual interfaces in terms of their available resources. These properties make the device an ideal candidate for virtualizing and sharing network resources in a server platform. We plan to use this device for system virtualization, where each guest OS running on top of a hypervisor will own one or more VIFs. We envision lighter-weight and more scalable virtualized systems when using self-virtualizing devices. In particular, the HV on the host will be responsible for managing the virtual interfaces using the API provided by the self-virtualizing device. Once a virtual interface has been configured, the major part of network IO will take place without any HV involvement. The HV will be responsible for routing the interrupt(s) generated by the self-virtualizing device to the appropriate guest domains. Future interrupt sub-system modifications, such as a larger interrupt space and hardware support for routing interrupts directly to guest domains [4], may relieve the HV of interrupt routing responsibility altogether. We also plan to make the network IO path more efficient: we will add support for DMA for data copying across the PCI bus and benchmark its performance, and we plan to add support for jumbo frames in order to improve achievable bandwidth, specifically for gigabit links.

References

[1] ENP-2611 Data Sheet. datasheet.pdf.
[2] IBM eServer xSeries ServeRAID Technology. ftp://ftp.software.ibm.com/pc/pccbbs/pc servers pdf/raidwppr.pdf.
[3] Intel Non-transparent PCI-to-PCI Bridge.
[4] Intel Virtualization Technology Specification for the IA-32 Intel Architecture. ftp://download.intel.com/technology/computing/vptech/c pdf.
[5] Iperf.
[6] Tcpdump/libpcap.
[7] Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Pratt, I., Warfield, A., Barham, P., and Neugebauer, R. Xen and the Art of Virtualization. In Proceedings of the ACM Symposium on Operating Systems Principles (October 2003).
[8] Gavrilovska, A., Mackenzie, K., Schwan, K., and McDonald, A. Stream Handlers: Application-specific Message Services on Attached Network Processors.
In Proceedings of the 10th International Symposium on Hot Interconnects (HOT-I 2002) (2002).
[9] Hady, F. T., Bock, T., Cabot, M., Chu, J., Meinechke, J., Oliver, K., and Talarek, W. Platform Level Support for High Throughput Edge Applications: The Twin Cities Prototype. IEEE Network (July/August 2003).
[10] Mackenzie, K., Shi, W., McDonald, A., and Ganev, I. An Intel IXP1200-based Network Interface. In Proceedings of the 2003 Annual Workshop on Novel Uses of System Area Networks (SAN-2) (February 2003).
[11] McAuley, D., and Neugebauer, R. A Case for Virtual Channel Processors. In Proceedings of the ACM SIGCOMM 2003 Workshops (2003).
[12] Pratt, I., and Fraser, K. Arsenic: A User Accessible Gigabit Network Interface. In Proceedings of IEEE INFOCOM (2001).
[13] Pratt, I., Fraser, K., Hand, S., Limpach, C., Warfield, A., Magenheimer, D., Nakajima, J., and Mallick, A. Xen 3.0 and the Art of Virtualization. In Proceedings of the Ottawa Linux Symposium (2005).
[14] Rosu, M., Schwan, K., and Fujimoto, R. Supporting Parallel Applications on Clusters of Workstations: The Virtual Communication Machine-based Architecture. In Proceedings of Cluster Computing (1998).
[15] Uhlig, V., LeVasseur, J., Skoglund, E., and Dannowski, U. Towards Scalable Multiprocessor Virtual Machines. In Proceedings of the 3rd Virtual Machine Research and Technology Symposium (San Jose, CA, May 2004).
[16] Willmann, P., Kim, H., Rixner, S., and Pai, V. An Efficient Programmable 10 Gigabit Ethernet Network Interface Card. In Proceedings of the 11th International Symposium on High-Performance Computer Architecture (2005).
