A Network Interface Card Architecture for I/O Virtualization in Embedded Systems


Holm Rauchfuss, Thomas Wild, Andreas Herkersdorf
Technische Universität München, Institute for Integrated Systems, Munich, Germany

This paper appeared at the Second Workshop on I/O Virtualization (WIOV '10), March 13, 2010, Pittsburgh, PA, USA.

ABSTRACT
In this paper we present an architectural concept for network interface cards (NICs) targeting embedded systems and supporting I/O virtualization. Current solutions for high-performance computing do not sufficiently address embedded system requirements, i.e. they neither guarantee real-time constraints and differentiated service levels nor make do with limited HW resources. The central ideas of our work-in-progress concept are: A scalable and streamlined NIC architecture stores the rule sets (contexts) for virtual network interfaces and associated information such as descriptors and producer/consumer lists primarily in the system memory. Only for currently active interfaces or interfaces with special requirements, e.g. hard real-time, is the required information cached on the NIC. By switching between the contexts, the NIC can flexibly adapt to service a scalable number of interfaces. With the contexts, the proposed architecture also supports differentiated service levels. On the NIC, (re-)configurable finite state machines (FSMs) handle the data path for I/O virtualization. This allows a more resource-limited NIC implementation. With a preliminary analysis we estimate the benefits of the proposed architecture, and key components of the architecture are outlined.

Categories and Subject Descriptors
C.4 [Performance of Systems]: Design Studies, Performance Attributes; B.4.2 [Input/Output and Data Communications]: Input/Output Devices - Channels and Controllers

General Terms
Design, Performance

Keywords
I/O Virtualization, Embedded Systems, Network Interface Card

1. INTRODUCTION
Over the last decade(s), virtualization has become a mainstream technique in data centers for better resource utilization by server consolidation. By abstraction, the physical resources are shared between several virtual machines (VMs), so-called domains. The improvement of the underlying virtual machine monitors (VMMs) ([1], [2]) and HW ([4]) for data centers has been targeted extensively by research. However, virtualization is still an emerging topic for embedded systems, in particular multiprocessor systems-on-chip. Their increasing performance and the combination of applications with different requirements on a single shared platform make them particularly well-suited for virtualization. First steps have been taken to analyze and adopt virtualization here ([6], [7]). A critical aspect is the virtualization of I/O, since there the computational overhead and the performance degradation are high, in both data centers and embedded systems. Research for High Performance Computing (HPC) shows that near-native throughput, i.e. throughput equal to a set-up without virtualization, can be achieved by improvements in SW packet handling and by offloading virtualization onto the NIC ([9], [10]). Since their focus is on maximizing overall system throughput, not on resource-limited NIC architectures, the proposed architectures are not optimal for use in embedded systems with their specific requirements.
The paper is structured as follows: Section 2 provides an overview of the state of the art of I/O virtualization. Section 3 describes the specific requirements for embedded systems and the fundamental concepts of the proposed NIC architecture. A preliminary performance estimation is given in Section 4. An exploration of key components is described in Section 5. Section 6 outlines future work and summarizes the paper.

2. STATE OF THE ART
Sharing physical network access between domains can be implemented in HW, in SW or in a mixed mode [12]. The generic, i.e. VMM-only, solution dedicates one virtualization domain as driver domain and exclusively assigns the network card to it. In such a system, other domains gain network access by transferring packets via a SW-based bridge and front- and back-end device drivers [1]. Several protocol improvements reduce the overhead of the actual transmission of the packets between the domains; a comprehensive overview is given by [11]. I/O virtualization can also be performed within the VMM itself, i.e. the hypervisor provides drivers for network cards and switches packets between the domains ([3]).

A further improvement to the above scenario is the usage of multi-queue network cards [9] such as Intel's VMDq [13]. Those network cards offer multiple pairs of Tx/Rx queues. This allows HW offloading of packet (de-)multiplexing and queuing for domains based on their MAC address (and VLAN tag). A Tx/Rx pair is assigned to a VM and the driver domain is granted access to the memory region with the respective Tx/Rx buffers. Tx queues are served round-robin.

Domains can also directly access a NIC via virtual network interfaces. Apparently, such approaches require extensions of the NIC, i.e. dedicated queues, buffers, interfaces and additional management logic. Before a domain can use its virtual network interface, the VMM has to configure the NIC accordingly. This concept is presented based on an IXP2400 network processor as a self-virtualizing network card [8]. Here, one microengine is used for demultiplexing Rx traffic and another one for multiplexing Tx traffic. Management of the network card is performed in SW on the NIC XScale CPU. The set-up is restricted to 8 domains, since the microengine is limited to 8 threads. To avoid coordination by the SW on the XScale, none of the other free microengines can be used for processing Rx or Tx traffic in parallel.

Direct I/O is also addressed by RiceNIC [10]. Here, concurrent network access is provided by a network card based on an FPGA. It contains a PowerPC CPU and several dedicated HW components (see Fig. 1 for an abstract representation). The SW on the PowerPC performs data and control path functions for packet processing. Each virtual network interface requires 388 KB of NIC memory: 4 KB for its context and 128 KB each for metadata, Tx buffer and Rx buffer.

Figure 1: RiceNIC with central processing on PowerPC CPU

Although the aforementioned solutions provide near-native throughput, they have several shortcomings with respect to their applicability in embedded environments. SW-based bridging and multi-queue network cards rely on a driver domain which is interleaved in the network communication path. This results in increased latency and (complex) scheduling dependencies. Processing time of the host CPU and system memory are utilized by this driver domain. If the hypervisor performs I/O virtualization directly, the trusted computing base of the hypervisor is broadened, with side effects on security, footprint and verification. Multi-queue network cards are limited in their number of available queue pairs. For supporting a scalable number of domains, such a NIC either has to keep unused pairs in reserve or fall back to SW-based bridging for excess domains. Rx queues are served in the order given by packet arrival, resulting in possible head-of-line blocking for high-priority packets.

Similarly, the concept for direct I/O is also restricted by the number of virtual network interfaces supported in HW. The utilized IXP2400 network processor is targeted as a line card for packet forwarding and processing, i.e. it does not represent an optimal reference architecture for network cards supporting virtualization due to its limited interface to the host. The primary goal of RiceNIC is to have a configurable and flexible NIC architecture. Therefore most functionality is performed by the firmware on the PowerPC. As a negative side effect of this, the firmware is in the critical path for all packet processing, e.g. header parsing, descriptor generation and packet (de-)multiplexing. Furthermore, extending RiceNIC with extra virtual network interfaces requires additional NIC memory for each of them. Finally, as overall throughput performance is the focus of I/O virtualization research, minor efforts have been put into resource-limited concepts for the network cards themselves. This motivates our proposed concept, which is presented subsequently.

3. CONCEPT FOR AN ES-VNIC ARCHITECTURE
To better understand the need for efficient I/O virtualization in embedded systems, we give an introductory example here: An automotive head unit for premium cars represents a flexible and high-performance, but still embedded system. It consolidates infotainment (video, audio, Internet access, etc.) and numerous car-related, safety-critical functions (park distance control, user interface for driver assistance systems, warning signals, etc.) on one HW platform and is connected via network to other electronic control units. Based on the actual driving situation, different sets of functions, which can be partitioned into domains to achieve robustness via isolation, and their communication are active. Those situations can change quickly, e.g. jumping from normal radio listening to displaying an urgent traffic warning. Most functions have to be running concurrently to prevent disruptive delays caused by starting them first. To be usable in an automotive environment, the head unit also has to be implemented in a very cost- and power-efficient way.

3.1 Requirement Analysis
To fit both embedded systems and I/O virtualization, NIC architecture concepts need to address special requirements: The goal of overall maximum throughput has to be complemented with low latency and real-time processing of packets for specific domains. For an embedded system, a mix of hard real-time, soft real-time and best-effort domains has to be supported. As an example, a hard real-time domain with a networked closed-loop control requires transmitting traffic without jitter, as opposed to a best-effort domain with bursty video streams. Overall, the network card should provide calculable and predictable response times for traffic transfers. With this requirement, SW should not be considered in the critical transmission path, either on the NIC itself or via a driver domain. Different service levels require enriched methods to process packets and to signal specific events to the VMM and domains. This includes prioritization of packets and interfaces, and also observation of bandwidth guarantees and packet-dropping probabilities. The general design of the network card has to include only a limited number of HW components for enabling virtualization. In relation to the power consumption and performance of the complete embedded system, the NIC should contribute only a small fraction to it, but still provide high throughput, i.e. several 100 Mb/s or higher. Furthermore, the usage of NIC memory should be limited to a minimum; instead, the system memory should be used as much as possible. Performing I/O virtualization in the VMM or the domains should be avoided to keep the cores free for actual processing, as in embedded systems CPU power is usually scarcer than in HPC systems.

In general, I/O virtualization requires a NIC to perform the following tasks efficiently:

Header-Parsing: The header of incoming packets has to be parsed to determine the destination domain. Only the MAC destination address and VLAN tag of the Ethernet header are required for layer 2 switching.

Buffering: It must be possible to efficiently buffer a packet, because prior packets block further processing or packets with higher priority have to be processed first.

Scheduling: The NIC should be able to switch processing between packets, either due to temporary blockings or to handle packets of domains with higher priority first. Thereby, the NIC can multiplex outgoing packets from the domains and demultiplex incoming traffic in a more sophisticated manner than simple round-robin.

DMA: The NIC should have the ability to transfer a packet to or from the (system) memory on its own.

Signaling: Based on pre-defined service levels, the NIC should be able to individually signal certain events to the VMM or directly to domains. Events can be interrupts for new packet arrivals or requests for new descriptors.

Management: The basic management for packet processing, i.e. (re-)configuration of HW blocks and coordination of the individual tasks, should be performed within the NIC.

3.2 Proposed Architecture and Exemplary Packet Processing
The above requirements and considerations drive our proposal for a new Embedded System specific VNIC (ES-VNIC) architecture (see Fig. 2).

Figure 2: Concept of ES-VNIC architecture
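Central to this architecture are the per-interface rule sets (contexts). Their exact content is left open in this work-in-progress concept; purely as an illustration, a minimal C sketch of what a context might carry is given below, where all names and field choices are our assumptions, not part of the proposal:

    /* Illustrative sketch of a per-interface context (rule set). The
       concept leaves the exact layout open, so every field here is an
       assumption made for illustration. */
    #include <stdint.h>

    enum service_class { HARD_RT, SOFT_RT, BEST_EFFORT };

    struct es_vnic_context {
        uint16_t domain_id;        /* owning domain                          */
        uint8_t  service_class;    /* hard/soft real-time or best-effort     */
        uint8_t  priority;         /* scheduling priority of this interface  */
        uint8_t  pinned;           /* pinned to the on-NIC context cache?    */
        uint8_t  irq_threshold;    /* packets to accumulate before signaling */
        uint32_t bandwidth_kbps;   /* bandwidth guarantee to observe         */
        uint64_t ring_base;        /* descriptor ring base in system memory  */
        uint64_t pc_list_base;     /* producer/consumer list base address    */
        /* ... further rules, e.g. FSM (re)configuration data ...            */
    };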
The ES-VNIC should provide the right trade-off, aiming at high throughput combined with QoS and real-time support rather than at the ultimate throughput (tens of Gb/s) of server or HPC environments. It relies on a tailored set of finite state machines specifically crafted for handling the tasks described above. By this, the footprint of I/O virtualization in the HW is reduced and better support for real-time constraints and service levels of domains can be provided. By decoupling those FSMs, parallel and pipelined processing is possible. To improve scalability, the resources (queues, caches, buffer) on the NIC are not constantly occupied by domains or interfaces, but instead assigned dynamically. Different levels of service may be provided: For interfaces with real-time constraints, configuration and queues always reside within the ES-VNIC. Best-effort interfaces, by contrast, share the available resources, i.e. their rule sets are loaded on demand from system memory, replacing the information of inactive interfaces. The NIC contains a standard MAC which is wrapped by flexible HW extensions to enable direct I/O. Those extensions are best described by explaining their interaction for processing an incoming Ethernet packet (see Fig. 3). This figure is a message sequence chart representation of the incoming packet processing: The communication between the different extensions is visualized by directed lines, i.e. handing over data or triggering those extensions.

Figure 3: Processing packet with ES-VNIC (Rx path)

A block stands for a delay in an extension, either for processing or for storing data. Time progresses down the Y axis, i.e. the figure has to be read from top to bottom. A packet that arrives at the MAC is temporarily stored in the NIC buffer, and the header is sent in parallel to the header-parsing unit, where the relevant information regarding which domain this packet should be routed to is extracted. These actions are performed at line speed. As only the header is parsed, the header-parsing unit completes before the complete packet is stored in the buffer. The NIC buffer can store a maximum-sized Ethernet packet in its entirety, and any packet can be accessed arbitrarily. Therefore, packets do not have to be processed in their incoming order, e.g. high-priority packets for real-time tasks can be preferred. The address of the packet is handed to the header-parsing unit, which combines it with the extracted header information for identifying the packet. With the extracted header information, the management FSM can then start to select the context for processing this packet. In this context all relevant information regarding the handling is stored, for example which priority such a packet should have, which are the conditions for signaling the domain of the arrival of the packet, etc. The main store for those contexts is the system memory, in order to limit the resources in the ES-VNIC. Only a small cache for contexts with packets under processing is present on the ES-VNIC. Contexts for critical domains can be pinned to the cache permanently. Contexts for best-effort or low-priority packets instead have to be loaded from system memory, which involves writing back contexts that need to be replaced due to the cache size limitation. A context can contain the rule set for a complete domain, but also for individual Rx or Tx network interfaces. A context can comprise several kilobytes of data, since it contains advanced rules, priority settings and configurations. As loading and writing back may take a considerable amount of time, the management FSM is designed to handle several such processes and contexts in parallel, switching between them to decrease stalling. At any time, several packets shall be processed by the ES-VNIC in parallel. Similar to the contexts, the descriptors and the respective producer/consumer lists (P/C lists) have to be available in the local cache or be fetched from the system memory if required. The descriptors are stored in generic queues where they can be read by the scheduler. The queue-alloc unit is responsible for assigning and filling those queues. Based on the contexts of the current packets, the scheduling unit decides which packet should be processed next and fetches a descriptor from the respective queue. Along with the respective address of the packet in the NIC buffer, this information is handed over to the DMA unit. The DMA unit will then write the packet over the system bus to the system memory. Afterwards, it informs the management unit about the completion of the action. The respective producer/consumer list is updated and written back to the system memory, where it can be read by the domain.
Then the management unit configures the signaling unit according to the context, i.e. an immediate interrupt for the packet or waiting until a threshold of packets is reached. The respective signaling concludes the packet processing. The same units are utilized for sending a packet. Only the header-parsing unit is not used, as a packet is already associated with a Tx interface and therefore with the respective context. The ES-VNIC management is triggered by the driver to send a packet. The respective context is loaded and the descriptor is read into an allocated queue. If the scheduling unit decides to send this packet, the descriptor is handed over to the DMA unit, which writes the packet into the NIC buffer. Once completely written, it is sent out via the MAC. Domains can modify the data structures for contexts and descriptors in the system memory only after validation by the hypervisor, to prevent erroneous or malicious input. This is abstracted via calls to the hypervisor in the domain's driver. The hypervisor notifies the ES-VNIC, which invalidates cached information and fetches the new input from system memory.
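To recap the Rx path in compact form, the following minimal C sketch flattens the steps described above into sequential calls. In HW these units run as decoupled, pipelined FSMs; all names, types and the toy stub logic are our assumptions, not part of the proposal:

    /* Flattened, sequential sketch of the ES-VNIC Rx path described
       above; in HW the units run as decoupled, pipelined FSMs.
       All names and the toy stub logic are illustrative assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    #define CTX_CACHE_SLOTS 4

    struct hdr_info { unsigned domain_id; };
    struct context  { unsigned domain_id; int pinned; };

    static struct context ctx_cache[CTX_CACHE_SLOTS]; /* small on-NIC cache */

    /* parse only the header: L2 switching needs MAC address (and VLAN) */
    static struct hdr_info header_parse(const uint8_t *frame)
    {
        struct hdr_info h = { .domain_id = frame[5] };   /* toy mapping */
        return h;
    }

    /* management: return the cached context, else fetch it from system
       memory (not modeled here), evicting a non-pinned slot */
    static struct context *mgmt_select_context(struct hdr_info h)
    {
        for (int i = 0; i < CTX_CACHE_SLOTS; i++)
            if (ctx_cache[i].domain_id == h.domain_id)
                return &ctx_cache[i];
        ctx_cache[CTX_CACHE_SLOTS - 1] =
            (struct context){ .domain_id = h.domain_id, .pinned = 0 };
        return &ctx_cache[CTX_CACHE_SLOTS - 1];
    }

    static void rx_path(const uint8_t *frame, size_t len)
    {
        /* buffering in the NIC buffer and header parsing run in
           parallel at line speed */
        struct hdr_info h = header_parse(frame);
        struct context *ctx = mgmt_select_context(h);

        /* queue-alloc provides a descriptor, the scheduler picks the
           packet, the DMA unit writes it to system memory, the P/C list
           is updated and the domain is signaled (all stubbed here) */
        printf("packet (%zu B) -> domain %u: DMA, P/C update, signal\n",
               len, ctx->domain_id);
    }

    int main(void)
    {
        uint8_t frame[64] = { [5] = 2 };   /* toy Ethernet frame */
        rx_path(frame, sizeof frame);
        return 0;
    }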

4. PRELIMINARY PERFORMANCE ESTIMATION
Based on the presented ES-VNIC architecture concept, we give a preliminary performance estimation. The focus is on the incoming packet processing sequence as introduced and described for the ES-VNIC in Section 3. The processing sequence for a network card performing I/O virtualization via CPU firmware, like RiceNIC, is depicted in Fig. 4.

Figure 4: Processing packet with a CPU-centric NIC (Rx path)

Incoming packets are transferred from the MAC via DMA to the NIC internal memory. Afterwards, the NIC CPU is notified. The SW then processes the packet, including header parsing, scheduling and queuing plus managing and configuring the other HW blocks. During processing, the SW has to access the NIC internal memory for packet data and instruction code. The number of accesses depends on the size and associativity of the NIC CPU's caches. After being queued, the packet is transferred via DMA to the system memory. A simple qualitative comparison of the sequences reveals the following points: The firmware on the single CPU performing the tasks for I/O virtualization constitutes a sequential trail of tasks which, due to the processing latency, may evolve into a bottleneck. Adding further CPUs is not a favorable solution, as it would contradict the goal of a resource-limited implementation. On a CPU with data cache (re-)loading and instruction fetching, it is not optimal to perform tasks like header parsing, queuing or managing descriptors, due to the lack of temporal locality (for example, header parsing is performed only once per packet). These tasks can be performed in fewer clock cycles with finite state machines. A pipelined architecture whose stages are FSMs allows the same throughput at a lower frequency than performing the respective tasks in sequential SW on a CPU. These points lead to the working hypothesis that the ES-VNIC architecture needs low and deterministic processing time. A prerequisite is that the FSMs are flexible enough to service a mix of hard real-time, soft real-time and best-effort domains.
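The pipelining argument can be made concrete with a toy calculation: per packet, a sequential CPU pays the sum of all task latencies, whereas a pipeline of decoupled stages is paced only by its slowest stage. The stage times below are invented example values, not measurements:

    /* Toy illustration of the pipelining argument above: per-packet
       service time is the SUM of all tasks on a sequential CPU, but
       only the MAX over the stages in a pipeline of decoupled FSMs.
       The stage times are invented example values. */
    #include <stdio.h>

    int main(void)
    {
        const double t_ns[] = { 512.0,   /* buffering / header parsing  */
                                200.0,   /* management (context cached) */
                                150.0,   /* scheduling / queue-alloc    */
                                512.0 }; /* DMA to system memory        */
        double sum = 0.0, max = 0.0;

        for (int i = 0; i < 4; i++) {
            sum += t_ns[i];
            if (t_ns[i] > max)
                max = t_ns[i];
        }
        printf("sequential CPU : one packet every %4.0f ns\n", sum);
        printf("pipelined FSMs : one packet every %4.0f ns\n", max);
        return 0;
    }

This max-versus-sum distinction is exactly what the formal terms below capture.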
As descriptors need to be fetched from system memory in case that they are not already on the NIC, the queue-alloc unit needs more time to finish. For (hard) real-time packets the queues should therefore already be pre-allocated and the descriptors pre-fetched to guarantee an upper bound. Finally, T is the time needed to transmit a packet from the NIC Buffer to the system memory and depends on packet size and on the performance of system bus and memory. The following term describes the delta time of ES-VNIC i.e., the time which can be spent in each stage of the pipelined architecture for processing a packet:

T_DeltaRx = max( max(T_NICBuffer, T_HeaderParsing), T_Management, max(T_Scheduling, T_QueueAlloc), T_DMA )    (2)

If this time matches the rate of consecutive incoming packets, the ES-VNIC can cope with the speed of this traffic, so that no packet drops will occur. This is crucial to support network interfaces for hard real-time and critical domains. This time is strongly dominated by the system bus and memory. The performance of the ES-VNIC is thus driven by the system bus and memory, i.e. systematically linked to the performance of the (embedded) system itself. As worst-case scenario for T_DeltaRx, the requirement to handle a constant flow of packets with minimum frame size and minimum interval on a 1 Gbit/s MAC can be used. A packet size of 64 byte plus 20 byte overhead for preamble, start-of-frame delimiter and interframe gap results in:

(64 + 20) byte * 8 bit/byte / 1 Gbit/s = 672 nanoseconds    (3)

This means that every 672 nanoseconds a new packet arrives and has to be processed. With a clock of 125 MHz for Gigabit Ethernet, every pipeline stage would have only 84 cycles to complete its task.

5. EXPLORATION OF KEY ARCHITECTURE COMPONENTS
We started to model the key components of the proposed ES-VNIC architecture for simulation in SystemC [14]. As described in Section 3, the architecture should only utilize flexible HW resources. The focus is therefore on the related FSMs, structures and data elements in queue-allocation (see Fig. 5) and management (see Fig. 6), on the exploration of the size of the local buffers as well as the underlying data paths of the components, and on efficient loading of contexts.

5.1 Queue-Allocation

Figure 5: Key component: Queue-Allocation

The Rx and Tx rings that contain the descriptors are stored in the system memory, in this example for Rx interfaces A, B and Tx interfaces C, D. Their content is defined by the network drivers. On the NIC, a limited set of assignable queues is available. For interfaces with real-time constraints, such a queue is blocked and filled with the maximum number of available descriptors. Otherwise, if triggered by a context for either sending or receiving a packet, a queue allocation is done, i.e. if no queue already contains the respective descriptor(s) for this context, a queue is reserved and the descriptors are fetched from the system memory. This fetching is done by a dedicated HW engine. In Fig. 5, one queue is blocked for A (depicted by an inscribed A in this queue); the others have to share the second queue. This may result in flushing of descriptors for an inactive context or a context with lower priority (a toy model of this policy follows below). A further fetch is issued if a threshold for the P/C list is reached. That threshold is defined by the context. There can be more or fewer network interfaces for receiving packets than for sending, since Rx and Tx rings do not have to be paired. With this feature it is possible to have a Tx interface for broadcasting status information and no corresponding Rx interface (if no acknowledgements are needed); this is a quite common scenario for embedded systems. Furthermore, to prevent head-of-line blocking for one domain, several Rx interfaces for receiving packets with different service levels can be established. In general, the number of assignable queues (o) is limited and smaller than the number of Rx rings (m) and Tx rings (n) in the system memory, i.e. m + n > o.
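As a toy software model of this allocation scheme, the following sketch pins one queue for a hard real-time interface and shares the rest; the exact replacement rule (lowest priority loses) is our assumption, not fixed by the concept:

    /* Illustrative model of the assignable-queue allocation of Fig. 5:
       one queue is pinned for a hard real-time interface, the remaining
       queues are shared on demand, flushing descriptors of inactive or
       lower-priority contexts. Names and the policy are assumptions. */
    #include <stdio.h>

    #define NUM_QUEUES 2

    struct queue { int ctx_id; int pinned; int prio; };

    static struct queue q[NUM_QUEUES] = {
        { .ctx_id = 'A', .pinned = 1, .prio = 9 },  /* blocked for A (RT) */
        { .ctx_id = -1 },                           /* shared by others   */
    };

    /* Return a queue holding descriptors for ctx_id, flushing a victim
       and refetching (via the HW engine, not modeled) if necessary;
       -1 if only pinned or higher-priority queues remain. */
    static int queue_alloc(int ctx_id, int prio)
    {
        int victim = -1;
        for (int i = 0; i < NUM_QUEUES; i++) {
            if (q[i].ctx_id == ctx_id)
                return i;                            /* already resident  */
            if (!q[i].pinned && (victim < 0 || q[i].prio < q[victim].prio))
                victim = i;                          /* lowest-prio victim */
        }
        if (victim < 0 || q[victim].prio >= prio)
            return -1;
        q[victim].ctx_id = ctx_id;                   /* flush and refetch  */
        q[victim].prio   = prio;                     /* descriptors        */
        return victim;
    }

    int main(void)
    {
        printf("B -> queue %d\n", queue_alloc('B', 3)); /* takes shared queue */
        printf("C -> queue %d\n", queue_alloc('C', 5)); /* flushes B          */
        printf("B -> queue %d\n", queue_alloc('B', 3)); /* denied: C higher   */
        return 0;
    }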
5.2 Management (with Contexts)

Figure 6: Key component: Management (with Contexts)

The management component is assembled from the contexts in system memory, a cache for them on the NIC, multithreaded FSMs and connections to the other units. In our example, the interfaces A to Z exist and their contexts are kept in system memory (m for Rx interfaces plus n for Tx interfaces). If a packet is to be sent or received and the respective context is not present in the ES-VNIC, the context is fetched from the system memory and stored in a cache slot (v). The data of the context is loaded into one of the multithreaded FSMs (w) by a dedicated HW engine. Using fixed entry points, the packet processing management is then started. Loading the context results in two things: First, the FSM is (re-)configured, i.e. the respective state diagram is modified. By default, the state diagram is preset to the most common case for an interface. The context can then add or remove states and transitions, adapting the ES-VNIC for processing packets for this specific interface. For example, FSMs for interfaces being polled can be stripped of the states and transitions for signaling incoming messages. Another option are additional (security) steps for a critical packet and its interface, preventing deletion of the packet from the NIC buffer until it has been copied into the system memory and validated there. Second, data from the context is used as input for the registers that define and trigger the other FSMs (queue-alloc, scheduling, P/C lists). For multithreading, there are multiple sets of the input and output registers per FSM. By mapping a thread to a packet, the ES-VNIC can switch quickly between the processing of several packets (similar to processing in a multithreaded CPU). Similar to the queues in queue-alloc, contexts can be pinned to cache slots and FSMs; in our example this is the case for A, representing a hard real-time interface. The other interfaces have to share the remaining available resources.
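The thread switching of the management FSMs can be illustrated with a small model; the round-robin selection among non-stalled threads is an assumption for illustration, as the concept does not prescribe a policy:

    /* Illustrative model of the multithreaded management FSMs (Fig. 6):
       each FSM holds w register sets, one per in-flight packet, and
       switches threads whenever the active one stalls on a system-memory
       access (e.g. a context fetch). Purely an assumption-level sketch. */
    #include <stdio.h>

    #define NUM_THREADS 3   /* register sets per FSM (w) */

    struct thread {
        int packet_id;      /* packet mapped to this thread, -1 if free */
        int stalled;        /* waiting on a context/descriptor fetch?   */
    };

    static struct thread t[NUM_THREADS] = {
        { 11, 1 },          /* stalled: context being fetched via DMA */
        { 12, 0 },
        { -1, 0 },
    };

    /* pick the next runnable thread, round-robin from 'cur' */
    static int next_thread(int cur)
    {
        for (int i = 1; i <= NUM_THREADS; i++) {
            int n = (cur + i) % NUM_THREADS;
            if (t[n].packet_id >= 0 && !t[n].stalled)
                return n;
        }
        return -1;          /* all threads stalled or idle */
    }

    int main(void)
    {
        int cur = 0;        /* thread 0 just stalled on a context load */
        int nxt = next_thread(cur);
        printf("switch FSM from packet %d to packet %d\n",
               t[cur].packet_id, nxt >= 0 ? t[nxt].packet_id : -1);
        return 0;
    }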

6. FUTURE WORK AND SUMMARY
Future work comprises the simulation of the key components to validate the proposed architecture and the preliminary performance estimations. Here, set-ups which require displacement of contexts, descriptors and P/C lists on the ES-VNIC during run-time are of particular interest. This will involve dimensioning of cache sizes, packet buffers, queues and the number of multithreaded FSMs, as well as functional verification of those FSMs. Afterwards, the network card architecture is to be physically implemented as part of an MPSoC demonstrator in an FPGA to prove its applicability to real-world scenarios.

In this work-in-progress paper we introduced a new virtualizing NIC architecture concept particularly addressing the requirements of I/O virtualization in embedded systems. We showed that current concepts that address HPC do not match those requirements. Thus, the needs for this application area have been discussed and a favorable design has been deduced. A preliminary performance estimation and a short presentation of key elements have also been given. With this paper, it is our objective to raise awareness for the research of I/O virtualization in embedded system network cards and the new challenges there.

7. REFERENCES
[1] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the art of virtualization. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles (SOSP), ACM Press, 2003.
[2] A. Kivity, Y. Kamay, and D. Laor. kvm: the Linux Virtual Machine Monitor. In Linux Symposium, 2007.
[3] M. Mahalingam and R. Brunner. I/O Virtualization (IOV) For Dummies. In VMworld.
[4] L. van Doorn. Hardware virtualization trends. In Proceedings of the 2nd International Conference on Virtual Execution Environments, June 2006.
[5] A. Menon, A. L. Cox, and W. Zwaenepoel. Optimizing network virtualization in Xen. In Proceedings of the USENIX Annual Technical Conference, June 2006.
[6] G. Heiser. The role of virtualization in embedded systems.
In Proceedings of the 1st Workshop on Isolation and Integration in Embedded Systems, April 2008.
[7] H. Inoue, A. Ikeno, M. Kondo, J. Sakai, and M. Edahiro. VIRTUS: A new processor virtualization architecture for security-oriented next-generation mobile terminals. In Proceedings of the 43rd Annual Conference on Design Automation, 2006.
[8] H. Raj and K. Schwan. Implementing a scalable self-virtualizing network interface on a multicore platform. In Workshop on the Interaction between Operating Systems and Computer Architecture, October 2005.
[9] K. K. Ram, J. R. Santos, Y. Turner, A. L. Cox, and S. Rixner. Achieving 10 Gb/s using safe and transparent network interface virtualization. In Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments.
[10] P. Willmann, J. Shafer, D. Carr, A. Menon, S. Rixner, A. L. Cox, and W. Zwaenepoel. Concurrent direct network access for virtual machine monitors. In Proceedings of the International Symposium on High-Performance Computer Architecture, February 2007.
[11] J. Wang. Survey of State-of-the-art in Inter-VM Communication Mechanisms. Research Proficiency Report, September 2009.
[12] J. R. Santos, Y. Turner, and J. Mudigonda. Taming Heterogeneous NIC Capabilities for I/O Virtualization. In Proceedings of the Workshop on I/O Virtualization, 2008.
[13] S. Chinni and R. Hiremane. Virtual Machine Device Queues. Whitepaper, Intel.
[14] T. Grötker, S. Liao, G. Martin, and S. Swan. System Design with SystemC. Kluwer Academic Publishers, 2002.


More information

IxChariot Virtualization Performance Test Plan

IxChariot Virtualization Performance Test Plan WHITE PAPER IxChariot Virtualization Performance Test Plan Test Methodologies The following test plan gives a brief overview of the trend toward virtualization, and how IxChariot can be used to validate

More information

Adobe LiveCycle Data Services 3 Performance Brief

Adobe LiveCycle Data Services 3 Performance Brief Adobe LiveCycle ES2 Technical Guide Adobe LiveCycle Data Services 3 Performance Brief LiveCycle Data Services 3 is a scalable, high performance, J2EE based server designed to help Java enterprise developers

More information

Chapter 2 Quality of Service for I/O Workloads in Multicore Virtualized Servers

Chapter 2 Quality of Service for I/O Workloads in Multicore Virtualized Servers Chapter 2 Quality of Service for I/O Workloads in Multicore Virtualized Servers J. Lakshmi and S.K. Nandy Abstract Emerging trend of multicore servers promises to be the panacea for all data-center issues

More information

The Microsoft Windows Hypervisor High Level Architecture

The Microsoft Windows Hypervisor High Level Architecture The Microsoft Windows Hypervisor High Level Architecture September 21, 2007 Abstract The Microsoft Windows hypervisor brings new virtualization capabilities to the Windows Server operating system. Its

More information

Security Overview of the Integrity Virtual Machines Architecture

Security Overview of the Integrity Virtual Machines Architecture Security Overview of the Integrity Virtual Machines Architecture Introduction... 2 Integrity Virtual Machines Architecture... 2 Virtual Machine Host System... 2 Virtual Machine Control... 2 Scheduling

More information

Region 10 Videoconference Network (R10VN)

Region 10 Videoconference Network (R10VN) Region 10 Videoconference Network (R10VN) Network Considerations & Guidelines 1 What Causes A Poor Video Call? There are several factors that can affect a videoconference call. The two biggest culprits

More information

Local Area Networks transmission system private speedy and secure kilometres shared transmission medium hardware & software

Local Area Networks transmission system private speedy and secure kilometres shared transmission medium hardware & software Local Area What s a LAN? A transmission system, usually private owned, very speedy and secure, covering a geographical area in the range of kilometres, comprising a shared transmission medium and a set

More information

A Platform Built for Server Virtualization: Cisco Unified Computing System

A Platform Built for Server Virtualization: Cisco Unified Computing System A Platform Built for Server Virtualization: Cisco Unified Computing System What You Will Learn This document discusses how the core features of the Cisco Unified Computing System contribute to the ease

More information

AMD Opteron Quad-Core

AMD Opteron Quad-Core AMD Opteron Quad-Core a brief overview Daniele Magliozzi Politecnico di Milano Opteron Memory Architecture native quad-core design (four cores on a single die for more efficient data sharing) enhanced

More information

Configuring QoS. Understanding QoS CHAPTER

Configuring QoS. Understanding QoS CHAPTER 24 CHAPTER This chapter describes how to configure quality of service (QoS) by using standard QoS commands. With QoS, you can give preferential treatment to certain types of traffic at the expense of others.

More information

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage sponsored by Dan Sullivan Chapter 1: Advantages of Hybrid Storage... 1 Overview of Flash Deployment in Hybrid Storage Systems...

More information

How To Monitor And Test An Ethernet Network On A Computer Or Network Card

How To Monitor And Test An Ethernet Network On A Computer Or Network Card 3. MONITORING AND TESTING THE ETHERNET NETWORK 3.1 Introduction The following parameters are covered by the Ethernet performance metrics: Latency (delay) the amount of time required for a frame to travel

More information

Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc

Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc (International Journal of Computer Science & Management Studies) Vol. 17, Issue 01 Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc Dr. Khalid Hamid Bilal Khartoum, Sudan dr.khalidbilal@hotmail.com

More information

Enabling Technologies for Distributed Computing

Enabling Technologies for Distributed Computing Enabling Technologies for Distributed Computing Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF Multi-core CPUs and Multithreading Technologies

More information

This topic lists the key mechanisms use to implement QoS in an IP network.

This topic lists the key mechanisms use to implement QoS in an IP network. IP QoS Mechanisms QoS Mechanisms This topic lists the key mechanisms use to implement QoS in an IP network. QoS Mechanisms Classification: Each class-oriented QoS mechanism has to support some type of

More information

Optimizing Shared Resource Contention in HPC Clusters

Optimizing Shared Resource Contention in HPC Clusters Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs

More information

Design and Implementation of an On-Chip timing based Permutation Network for Multiprocessor system on Chip

Design and Implementation of an On-Chip timing based Permutation Network for Multiprocessor system on Chip Design and Implementation of an On-Chip timing based Permutation Network for Multiprocessor system on Chip Ms Lavanya Thunuguntla 1, Saritha Sapa 2 1 Associate Professor, Department of ECE, HITAM, Telangana

More information

H.323 Traffic Characterization Test Plan Draft Paul Schopis, pschopis@itecohio.org

H.323 Traffic Characterization Test Plan Draft Paul Schopis, pschopis@itecohio.org H.323 Traffic Characterization Test Plan Draft Paul Schopis, pschopis@itecohio.org I. Introduction Recent attempts at providing Quality of Service in the Internet2 community have focused primarily on Expedited

More information

Xen Live Migration. Networks and Distributed Systems Seminar, 24 April 2006. Matúš Harvan Xen Live Migration 1

Xen Live Migration. Networks and Distributed Systems Seminar, 24 April 2006. Matúš Harvan Xen Live Migration 1 Xen Live Migration Matúš Harvan Networks and Distributed Systems Seminar, 24 April 2006 Matúš Harvan Xen Live Migration 1 Outline 1 Xen Overview 2 Live migration General Memory, Network, Storage Migration

More information

How To Provide Qos Based Routing In The Internet

How To Provide Qos Based Routing In The Internet CHAPTER 2 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 22 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 2.1 INTRODUCTION As the main emphasis of the present research work is on achieving QoS in routing, hence this

More information

Architecture of the Kernel-based Virtual Machine (KVM)

Architecture of the Kernel-based Virtual Machine (KVM) Corporate Technology Architecture of the Kernel-based Virtual Machine (KVM) Jan Kiszka, Siemens AG, CT T DE IT 1 Corporate Competence Center Embedded Linux jan.kiszka@siemens.com Copyright Siemens AG 2010.

More information

Contents. Connection Guide. What is Dante?... 2. Connections... 4. Network Set Up... 6. System Examples... 9. Copyright 2015 ROLAND CORPORATION

Contents. Connection Guide. What is Dante?... 2. Connections... 4. Network Set Up... 6. System Examples... 9. Copyright 2015 ROLAND CORPORATION Contents What is Dante?............................................. 2 Outline.................................................. 2 Fundamental............................................ 3 Required Network

More information