Bullet Trains: A Study of NIC Burst Behavior at Microsecond Timescales
Rishi Kapoor, Alex C. Snoeren, Geoffrey M. Voelker, George Porter
University of California, San Diego

Abstract

While numerous studies have examined the macro-level behavior of traffic in data center networks (overall flow sizes, destination variability, and TCP burstiness), little information is available on the behavior of data center traffic at packet-level timescales. Whereas one might assume that flows from different applications fairly share available link bandwidth, and that packets within a single flow are uniformly paced, the reality is more complex. To meet increasingly high link rates of 10 Gbps and beyond, batching is typically introduced across the network stack at the application, middleware, OS, transport, and NIC layers. This batching results in short-term packet bursts, which have implications for the design and performance requirements of packet processing devices along the path, including middleboxes, SDN-enabled switches, and virtual machine hypervisors.

In this paper, we study the burst behavior of traffic emanating from a 10-Gbps end host across a variety of data center applications. We find that at microsecond timescales, the traffic exhibits large bursts (i.e., 10s of packets in length). We further find that this level of burstiness is largely outside of application control, and independent of the high-level behavior of applications.

1. INTRODUCTION

As an increasing array of sophisticated applications move to the cloud, Internet data centers must adapt to meet their needs. Unlike software installed on a single machine, data center applications are spread across potentially thousands of hosts, resulting in increasingly stringent requirements placed on the underlying network interconnect. Data center networks must isolate different tenants and applications from each other, provide low-latency access to data spread across the cluster, and deliver high bisection bandwidth, all despite omnipresent component failures.

Administrators deploy a variety of packet processing technologies to achieve these goals, spanning all layers of the network stack. Virtual machine hypervisors implement complex virtual switched networks [18], ferrying packets between VMs and the rest of the data center. Individual data flows are forwarded through a variety of middleboxes [23], which process data packets to implement a variety of value-added services. Most recently, the adoption of software-defined networking (SDN) means that software controllers are involved in providing basic connectivity services within the network, a task previously assigned to custom, high-performance hardware devices [17]. The result of each of these trends is that more devices, running a mixture of software and hardware, sit on the data plane, handling packets.
At the same time, link rates continue to increase, first from 1 Gbps to 10 Gbps, and now from 10 Gbps to 40 Gbps, 100 Gbps [1], and beyond. Given the increasing speed of network links and the increasing complexity of packet processing tasks, handling these data flows is a growing challenge [23]. Designing efficient and performant packet processing devices, either in software or hardware, requires a deep understanding of the traffic that will transit them. Each of these devices has to carefully manage internal resources, including TCAM entries, flow entry caches, and CPU resources, all of which are affected by the arrival pattern of packets.

In their seminal work, Jain and Routhier studied the behavior of traffic transiting Ethernet networks at a packet-level granularity [12], and found that the traffic exhibited a number of unexpected behaviors, including the formation of trains, which are sets of packets, closely grouped in time, sent from a single source to a particular destination. Since this analysis was undertaken, networks have evolved considerably. For example, instead of 10-Mbps shared-medium Ethernet, modern data center networks rely on 10-Gbps switched Ethernet.

In this work, we analyze the output of 10-Gbps servers running a variety of data center workloads. We define a burst to be an uninterrupted sequential stream of packets from one source to one destination with an inter-packet spacing of less than 40 ns, which is four times the interframe gap (IFG) for 10-Gbps Ethernet. We find that the traffic exhibits bursty, train-type behavior, similar to Jain and Routhier's observations from nearly three decades ago. However, we observe several differences. First, there are new causes of these bursts, including in-NIC mechanisms designed to support higher link rates, such as TCP segmentation offloading. Second, in an effort to reduce CPU load, a number of data center applications rely on batching at various levels, which results in corresponding bursts of packets. Finally, how an application invokes standard OS system calls and APIs has a significant impact on the burstiness of the resulting traffic. Taken together, our aim is to better understand the network dynamics of modern data center servers and applications, and through that information, to enable more efficient packet processing across the network stack.

2. RELATED WORK

Nearly thirty years ago, Jain and Routhier [12] analyzed the behavior of shared-bus Ethernet traffic at MIT, and found that such traffic did not exhibit a Poisson distribution. Instead, they found
that packets followed a train model. In the train model, packets exhibit strong temporal and spatial locality, and hence many packets from a source are sent to the same destination back-to-back. In this model, packets arriving within a parameterized inter-arrival time of one another are called cars. If the inter-arrival time between packets/cars exceeds that threshold, the packets are said to belong to different trains. The inter-train time denotes the frequency at which applications initiate new transfers over the network, whereas the inter-car time reflects the delay added by the generating process and operating system, as well as CPU contention and NIC overheads. In this paper, we re-evaluate modern data center networks to look for the existence and cause of bursts/trains (two terms that we will use interchangeably).

A variety of studies investigate the presence of burstiness in deployed networks. In the data center, Benson et al. [6] examine the behavior of a variety of real-world workloads, noting that traffic is often bursty at short time scales and exhibits an ON/OFF behavior. At the transport layer, the widely used TCP protocol exhibits, and is the source of, a considerable amount of traffic bursts [2, 4, 7, 13]. Aggarwal et al. [2] and Jiang et al. [13] identify aspects of the mechanics of TCP, including slow start, loss recovery, ACK compression, ACK reordering, unused congestion window space, and bursty API calls, as the sources of these overall traffic bursts. Previous studies in the wide area include Jiang et al. [14], who examined traffic bursts in the Internet and found TCP's self-clocking nature and in-path queuing to result in ON/OFF traffic patterns. In this paper, we identify the causes of bursty traffic across the stack in a data center setting.

3. SOURCES OF TRAFFIC BURSTS

We now highlight the different layers of the network stack that account for bursts, including the application, the transport protocol, the operating system, and the underlying hardware (summarized in Table 1). Then, in Section 4, we present a detailed analysis of the resulting burst behavior of data center applications.

Table 1: Sources of network bursts.
Batching source   Examples                                 Reason for batching
Application       Streaming services (YouTube, Netflix)    Large send sizes
Transport         TCP                                      ACK compression, window scaling
OS                GSO/GRO                                  Efficient CPU & memory bus utilization
NIC               TSO/LRO                                  Reduce per-packet overhead
Disk drive        splice() syscall, disk read-ahead        Maximize disk throughput

3.1 Application

Ultimately, applications are responsible for introducing data into the network. Depending on the semantics of each individual application, data may be introduced in large chunks or in smaller increments. For real-time workloads like the streaming services YouTube and Netflix, server processes might write data into socket buffers in fine-grained, ON/OFF patterns. For example, video streaming servers often write 64 KB to 2 MB of data per connection [11, 21]. On the other hand, many Big Data data center applications are limited by the amount of data they can transfer per unit time. Distributed file systems [5, 10] and bulk MapReduce workloads [8, 22] maximize disk throughput by reading and writing files in large blocks. Because of the large reads these systems require, data is sent to the network in large chunks, potentially resulting in bursty traffic due to transport mechanisms, as described in the next subsection. NFS performs similar optimizations by sending data to the client in large chunks, in transfer sizes specified by the client.
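To put these application-level write sizes in perspective, the following back-of-the-envelope sketch estimates how many MTU-sized packets a single socket write produces and how long the resulting back-to-back train occupies a 10-Gbps link. The 64 KB and 2 MB write sizes come from the discussion above; the 1,460-byte TCP payload per frame and the per-frame wire overhead are assumed constants, and the sketch is illustrative rather than part of the paper's methodology.

```python
# Back-of-the-envelope estimate of the packet train produced by one large
# socket write when the stack emits the data back-to-back. The constants are
# assumptions, not values from the paper: 1,460 bytes of TCP payload per
# 1,500-byte MTU frame, and 1,538 bytes per frame on the wire (payload +
# TCP/IP + Ethernet headers, FCS, preamble, and interframe gap).
MSS = 1460                    # TCP payload bytes per full-sized segment
WIRE_BYTES_PER_FRAME = 1538   # total bytes each frame occupies on a 10 GbE link
LINK_BPS = 10e9               # 10-Gbps link rate

def burst_profile(write_bytes):
    """Return (packets, microseconds on the wire) for one application write."""
    packets = -(-write_bytes // MSS)   # ceiling division
    wire_time_us = packets * WIRE_BYTES_PER_FRAME * 8 / LINK_BPS * 1e6
    return packets, wire_time_us

for size in (64 * 1024, 2 * 1024 * 1024):   # the 64 KB and 2 MB writes of Section 3.1
    pkts, usec = burst_profile(size)
    print(f"{size // 1024:5d} KB write -> {pkts:5d} packets, ~{usec:8.1f} us back-to-back")
```

Under these assumptions, even a modest 64 KB write yields a burst of tens of packets occupying the link for tens of microseconds, consistent with the burst lengths reported in the abstract.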
3.2 Transport

The TCP transport protocol has long been identified as a source of traffic bursts, as described in Section 2. Beyond the basic protocol, some protocol improvements also affect its resulting burst behavior. One such example is TCP window scaling (introduced, e.g., in Linux 2.6), in which the TCP window representation is extended from 16 bits to 32 bits. The resulting increase in the potential window allows unacknowledged data to grow as large as 1 GB, resulting in potentially large traffic bursts.

3.3 Operating system

The operating system plays a large role in determining the severity of traffic bursts through a number of mechanisms. First, the API call boundary between the application and OS can result in bursty traffic. Second, a variety of OS-level parameters affect the transport and NIC hardware behavior, including maximum and minimum window sizes, window scaling parameters, and interrupt coalescing settings. Finally, the in-kernel mechanisms Generic Receive Offload (GRO) and Generic Segmentation Offload (GSO) can be used to reduce CPU utilization, at the expense of potentially increased burstiness. GSO sends large virtual packets down the network stack and divides them into MTU-sized packets just before handing the data to the NIC. GRO coalesces a number of smaller packets into a single, larger packet in the OS. Instead of a large number of smaller packets being passed through the network stack (TCP/IP), a single large packet is passed.

3.4 Hardware

The behavior of hardware on either the sender or the receiver affects the burstiness of network traffic. We now review a few of these sources.

Disk drives: To maximize disk throughput, the OS and disk controller implement a variety of optimizations to amortize read operations over relatively slow disk seeks, such as read-ahead caches and request reordering (e.g., via the elevator algorithm). Of particular interest is the interaction between the disk and the splice() or sendfile() system calls. These calls improve overall performance by offloading data transfer between storage devices and the network to the OS, without incurring data copies to userspace. The OS divides these system call requests into individual data transfers. In Linux, each transfer is the size of the disk read-ahead; the transfers are handed to the socket in batches and result in a train of packets. The read-ahead size is a configurable OS parameter with a default value of 128 KB.

Segmentation offload: TCP Segmentation Offload (TSO) is a NIC feature that enables the OS to send large virtual packets to the NIC, which then produces numerous MTU-sized packets. Typical TSO packet sizes are 64 KB. The benefit of this approach is that large data transfers reduce per-packet overhead (e.g., interrupt processing), thereby reducing CPU load.
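Several of the batching parameters described in Sections 3.3 and 3.4 are exposed by Linux through procfs and sysfs. The short sketch below reads a few of them; it is illustrative only, and the device names ("sda", "eth0") are placeholders that depend on the system.

```python
# Read a few of the Linux batching tunables mentioned in Sections 3.3 and 3.4.
# Illustrative sketch: "sda" and "eth0" are placeholder device names.
from pathlib import Path

def read_tunable(path):
    """Return the tunable's value as a string, or None if the file is absent."""
    p = Path(path)
    return p.read_text().strip() if p.exists() else None

tunables = {
    "disk read-ahead (KB)":        "/sys/block/sda/queue/read_ahead_kb",      # default 128
    "TCP window scaling (0/1)":    "/proc/sys/net/ipv4/tcp_window_scaling",
    "interface MTU (bytes)":       "/sys/class/net/eth0/mtu",
    "max GSO packet size (bytes)": "/sys/class/net/eth0/gso_max_size",        # typically 65536
}

for label, path in tunables.items():
    print(f"{label:30s} {read_tunable(path)}")
```

Whether GSO and GRO are currently enabled on an interface is normally queried or toggled with ethtool (ethtool -k / ethtool -K) rather than through these files.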
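The burst definition from Section 1, combined with the train model of Section 2, amounts to a simple grouping rule: consecutive packets of a flow belong to the same burst as long as their inter-arrival time stays below a threshold. A minimal sketch of that rule, assuming per-packet timestamps in nanoseconds, might look like the following; it is an illustration of the definition, not the authors' measurement code.

```python
# Group a single flow's packets into bursts ("trains"): consecutive packets
# whose inter-arrival time is below the threshold belong to the same burst.
# Illustrative sketch of the Section 1 definition, not the authors' code.
def group_into_bursts(timestamps_ns, threshold_ns=40):
    """timestamps_ns: sorted arrival times (in ns) of one src->dst packet stream.
    Returns a list of bursts, each a list of timestamps."""
    bursts = []
    for ts in timestamps_ns:
        if bursts and ts - bursts[-1][-1] < threshold_ns:
            bursts[-1].append(ts)    # gap below threshold: same burst
        else:
            bursts.append([ts])      # gap at or above threshold: new burst
    return bursts

# Example: three packets back-to-back, a long pause, then two more
example = [0, 30, 62, 5000, 5030]
print([len(b) for b in group_into_bursts(example)])   # -> [3, 2]
```

The threshold plays the role of the train model's inter-car time; Section 1 fixes it at 40 ns, four times the 10-Gbps Ethernet interframe gap.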
and More, and recently, Sinha et al. [24] exploit the burstiness of TCP to schedule flowlets on multiple paths through the network. Wischik finds that traffic bursts can potentially inform the sizing of buffers in routers and gateways along the path [26]. Recently, Vattikonda et al. [25] proposed overlaying a data center network with a TDMA-based link arbitration scheme, which replaces Ethernet's shared-medium model. With TDMA support, each host would periodically have exclusive access to the link. To maximally utilize that resource, each host would ideally send a burst of data. Porter et al. [19] rely on a similar TDMA scheme to implement an inter-ToR optical circuit-switched interconnect.

6. ACKNOWLEDGMENTS

We would like to thank our shepherd, Thomas Karagiannis, and the anonymous reviewers of this work for their valuable feedback. This work was supported in part by grants from the National Science Foundation (CNS and CNS) and by generous research, operational, and/or in-kind support from Microsoft, Google, and the UCSD Center for Networked Systems (CNS).

7. REFERENCES

[1] 100Gb/s Ethernet Task Force.
[2] A. Aggarwal, S. Savage, and T. E. Anderson. Understanding the Performance of TCP Pacing. In Proc. INFOCOM.
[3] M. Alizadeh, A. Kabbani, T. Edsall, B. Prabhakar, A. Vahdat, and M. Yasuda. Less is More: Trading a Little Bandwidth for Ultra-Low Latency in the Data Center. In Proc. NSDI.
[4] M. Allman and E. Blanton. Notes on Burst Mitigation for Transport Protocols. SIGCOMM Comput. Commun. Rev., 35(2):53-60, Apr.
[5] Apache Software Foundation. HDFS Architecture Guide. hdfs_design.html.
[6] T. Benson, A. Akella, and D. A. Maltz. Network Traffic Characteristics of Data Centers in the Wild. In Proc. IMC.
[7] E. Blanton and M. Allman. On the Impact of Bursting on TCP Performance. In Proc. PAM.
[8] J. Dean and S. Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. In Proc. OSDI.
[9] N. Farrington, G. Porter, S. Radhakrishnan, H. H. Bazzaz, V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat. Helios: A Hybrid Electrical/Optical Switch Architecture for Modular Data Centers. In Proc. ACM SIGCOMM, Aug.
[10] S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. In Proc. SOSP.
[11] M. Ghobadi, Y. Cheng, A. Jain, and M. Mathis. Trickle: Rate Limiting YouTube Video Streaming. In Proc. ATC.
[12] R. Jain and S. Routhier. Packet Trains: Measurements and a New Model for Computer Network Traffic. IEEE J. Sel. Areas Commun., 4(6), Sept.
[13] H. Jiang and C. Dovrolis. Source-level IP Packet Bursts: Causes and Effects. In Proc. IMC.
[14] H. Jiang and C. Dovrolis. Why is the Internet Traffic Bursty in Short Time Scales? In Proc. SIGMETRICS.
[15] M. Moshref, M. Yu, A. Sharma, and R. Govindan. Scalable Rule Management for Data Centers. In Proc. NSDI, 2013.
[16] R. N. Mysore, G. Porter, Subramanya, and A. Vahdat. FasTrak: Enabling Express Lanes in Multi-Tenant Data Centers. In Proc. CoNEXT.
[17] ONF. Software-Defined Networking: The New Norm for Networks.
[18] Open vSwitch: An Open Virtual Switch.
[19] G. Porter, R. Strong, N. Farrington, A. Forencich, P.-C. Sun, T. Rosing, Y. Fainman, G. Papen, and A. Vahdat. Integrating Microsecond Circuit Switching into the Data Center. In Proc. SIGCOMM.
[20] R. Prasad, M. Jain, and C. Dovrolis. Effects of Interrupt Coalescence on Network Measurements. In Proc. PAM.
[21] A. Rao, A. Legout, Y.-s. Lim, D. Towsley, C. Barakat, and W. Dabbous. Network Characteristics of Video Streaming Traffic. In Proc. CoNEXT.
[22] A. Rasmussen, G. Porter, M. Conley, H. V. Madhyastha, R. N. Mysore, A. Pucher, and A. Vahdat. TritonSort: A Balanced Large-Scale Sorting System. In Proc. NSDI.
[23] V. Sekar, N. Egi, S. Ratnasamy, M. K. Reiter, and G. Shi. Design and Implementation of a Consolidated Middlebox Architecture. In Proc. NSDI.
[24] S. Sinha, S. Kandula, and D. Katabi. Harnessing TCP's Burstiness using Flowlet Switching. In Proc. HotNets.
[25] B. Vattikonda, G. Porter, A. Vahdat, and A. C. Snoeren. Practical TDMA for Datacenter Ethernet. In Proc. ACM EuroSys.
[26] D. Wischik. Buffer Sizing Theory for Bursty TCP Flows. In International Zurich Seminar on Communications, 2006.
[27] T. Yoshino, Y. Sugawara, K. Inagami, J. Tamatsukuri, M. Inaba, and K. Hiraki. Performance Optimization of TCP/IP over 10 Gigabit Ethernet by Precise Instrumentation. In Proc. SC.
[28] L. Zhang, S. Shenker, and D. D. Clark. Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic. In Proc. SIGCOMM, 1991.