Introduction to TCP Offload Engines

By implementing a TCP Offload Engine (TOE) in high-speed computing environments, administrators can help relieve network bottlenecks and improve application performance. This article introduces the concept of TOEs, explains how TOEs interact with the TCP/IP stack, and discusses the potential performance advantages of implementing TOEs on specialized network adapters versus processing the standard TCP/IP protocol suite on the host CPU.

BY SANDHYA SENAPATHI AND RICH HERNANDEZ

As network interconnect speeds advance to Gigabit Ethernet[1] and 10 Gigabit Ethernet,[2] host processors can become a bottleneck in high-speed computing, often requiring more CPU cycles to process the TCP/IP protocol stack than the business-critical applications they are running. As network speed increases, so does the performance degradation incurred by the corresponding increase in TCP/IP overhead.

The performance degradation problem can be particularly severe in Internet SCSI (iSCSI) based applications, which use IP to transfer storage block I/O data over the network. By carrying SCSI commands over IP networks, iSCSI facilitates both intranet data transfers and long-distance storage management. To improve data-transfer performance over IP networks, the TCP Offload Engine (TOE) model can relieve the host CPU of the overhead of processing TCP/IP. TOEs allow the operating system (OS) to move all TCP/IP traffic to specialized hardware on the network adapter while leaving TCP/IP control decisions to the host server. By relieving the host processor bottleneck, TOEs can help deliver the performance benefits administrators expect from iSCSI-based applications running across high-speed network links. By facilitating file I/O traffic, TOEs also can improve the performance of network attached storage (NAS).
Moreover, TOEs are cost-effective because they can process the TCP/IP protocol stack on a high-speed network device that requires less processing power than a high-performance host CPU. This article provides a high-level overview of the advantages and inherent drawbacks of TCP/IP, explaining existing mechanisms to overcome the limitations of this protocol suite. In addition, TOE-based implementations and potential performance benefits of using TOEs instead of standard TCP/IP are described.

TCP/IP helps ensure reliable, in-order data delivery

Currently the de facto standard for internetwork data transmission, the TCP/IP protocol suite is used to transmit information over local area networks (LANs), wide area networks (WANs), and the Internet.

[1] This term does not connote an actual operating speed of 1 Gbps. For high-speed transmission, connection to a Gigabit Ethernet server and network infrastructure is required.
[2] This term does not connote an actual operating speed of 10 Gbps. For high-speed transmission, connection to a 10 Gigabit Ethernet (10GbE) server and network infrastructure is required.

POWER SOLUTIONS 103
TCP/IP processes can be conceptualized as layers in a hierarchical stack; each layer builds upon the layer below it, providing additional functionality. The layers most relevant to TOEs are the IP layer and the TCP layer (see Figure 1).

The IP layer serves two purposes: the first is to transmit packets between LANs and WANs through the routing process; the second is to maintain a homogeneous interface to different physical networks. IP is a connectionless protocol, meaning that each transmitted packet is treated as a separate entity. In reality, each network packet belongs to a certain data stream, and each data stream belongs to a particular host application. The TCP layer associates each network packet with the appropriate data stream and, in turn, the upper layers associate each data stream with its designated host application.

Most Internet protocols, including FTP and HTTP, use TCP to transfer data. TCP is a connection-oriented protocol, meaning that two host systems must establish a session with each other before any data can be transferred between them. Whereas IP does not provide for error recovery (IP can lose packets, duplicate packets, delay packets, or deliver packets out of sequence), TCP ensures that the host system receives all packets in order, without duplication. Because most Internet applications require reliable data that is delivered in order and in manageable quantities, TCP is a crucial element of the network protocol stack for WANs and LANs.

Reliability. TCP uses a checksum error-detection scheme, computing a 16-bit one's-complement sum over packet headers as well as packet data to ensure that packets have not been corrupted during transmission. A TCP pseudoheader is included in the checksum computation to verify the IP source and destination addresses.
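The checksum computation described above can be sketched in a few lines of Python. This is a simplified illustration of the Internet checksum algorithm (RFC 1071), omitting the TCP pseudoheader for brevity:

```python
def internet_checksum(data: bytes) -> int:
    """Internet checksum (RFC 1071): the one's complement of the
    one's-complement sum of the data taken as 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold carry back in
    return ~total & 0xFFFF
```

A receiver recomputes the sum over the received data together with the transmitted checksum field; an uncorrupted packet sums to zero, so any single-bit corruption in headers or payload is detected.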
In-order data delivery. Because packets that belong to a single TCP connection can arrive at the destination system via different routes, TCP incorporates a per-byte numbering mechanism. This scheme enables TCP to put packets that arrive at their destination out of sequence back into the order in which they were sent, before it delivers the packets to the host application.

Flow control. TCP monitors the number of bytes that the source system can transmit without overwhelming the destination system with data. As the source system sends packets, the receiving system returns acknowledgments. TCP incorporates a sliding window mechanism to avoid overrunning the receiver: as the sender transmits packets to the receiver, the window shrinks; as the sender receives acknowledgments from the receiver, the window grows again.

Multiplexing. TCP accommodates the flow from multiple senders by allowing different data streams to intermingle during transmission and reception. It identifies each data stream with a number called the TCP port, which associates the stream with its designated host application on the receiving end.

Traditional methods to reduce TCP/IP overhead offer limited gains

After an application sends data across a network, several data-movement and protocol-processing steps occur. These and other TCP activities consume critical host resources:

- The application writes the transmit data to the TCP/IP sockets interface for transmission in payload sizes ranging from 4 KB to 64 KB.
- The OS segments the data into maximum transmission unit (MTU) size packets, and then adds TCP/IP header information to each packet.
- The OS copies the data onto the network interface card (NIC) send queue.
- The NIC performs the direct memory access (DMA) transfer of each data packet from the TCP buffer space to the NIC, and interrupts CPU activities to indicate completion of the transfer.
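As an illustration only, the segmentation step in the send path above can be modeled in a few lines of Python; the constants and the all-zero header stub are simplified stand-ins, not a real stack implementation:

```python
MTU = 1500      # Ethernet payload bytes per packet on the wire
HEADER = 40     # combined TCP + IP header size, without options

def transmit(app_payload: bytes):
    """Toy model of the host-side send path: segment one application
    write into MTU-size packets and prepend a (stub) TCP/IP header to
    each, as the OS stack does before queuing packets for the NIC."""
    mss = MTU - HEADER                          # payload carried per packet
    packets = []
    for off in range(0, len(app_payload), mss):
        chunk = app_payload[off:off + mss]
        header = b"\x00" * HEADER               # stand-in for real headers
        packets.append(header + chunk)          # copy into the send queue
    return packets
```

A single 64 KB socket write thus becomes 45 separate packets, each of which the NIC later transfers by DMA and signals with an interrupt, which is the per-packet overhead the rest of this article is concerned with.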
The two most popular methods to reduce the substantial CPU overhead that TCP/IP processing incurs are TCP/IP checksum offload and large send offload.

Figure 1. Comparing standard TCP/IP and TOE-enabled TCP/IP stacks (in the standard stack, the upper-level protocols, TCP, and IP run in the operating system above a traditional NIC's MAC and PHY layers; in the TOE stack, TCP and IP processing moves onto the TOE adapter alongside the MAC and PHY)

104 POWER SOLUTIONS March 2004
TCP/IP checksum offload

The TCP/IP checksum offload technique moves the calculation of the TCP and IP packet checksums from the host CPU to the network adapter. For the TCP checksum, the transport layer on the host calculates the TCP pseudoheader checksum and places this value in the checksum field, thus enabling the network adapter to calculate the correct TCP checksum without touching the IP header. However, this approach yields only a modest reduction in CPU utilization.

Large send offload

Large send offload (LSO), also known as TCP segmentation offload (TSO), frees the OS from the task of segmenting the application's transmit data into MTU-size chunks. Using LSO, TCP can transmit a chunk of data larger than the MTU size to the network adapter. The adapter driver then divides the data into MTU-size chunks and uses the prototype TCP and IP headers of the send buffer to create TCP/IP headers for each packet in preparation for transmission.

LSO is a useful technology for scaling performance across multiple Gigabit Ethernet links, although only under certain conditions. The LSO technique is most efficient when transferring large messages. Also, because LSO is a stateless offload, it yields performance benefits only for traffic being sent; it offers no improvement for traffic being received. Although LSO can reduce CPU utilization by approximately half, this benefit can be realized only if the receiver's TCP window size is set to 64 KB. LSO has little effect on interrupt processing because it is a transmit-only offload.

Methods such as TCP/IP checksum offload and LSO provide limited performance gains or are advantageous only under certain conditions. For example, LSO is less effective when transmitting many smaller packets. Also, in environments where packets are frequently dropped and connections lost, connection setup and maintenance consume a significant proportion of the host's processing power.
Methods like LSO would produce minimal performance improvements in such environments.

TOEs reduce TCP overhead on the host processor

In traditional TCP/IP implementations, every network transaction results in a series of host interrupts for various processes related to transmitting and receiving, such as send-packet segmentation and receive-packet processing. Alternatively, TOEs can delegate all processing related to sending and receiving packets to the network adapter, leaving the host server's CPU more available for business applications. Because TOEs involve the host processor only once for every application network I/O, they significantly reduce the number of requests and acknowledgments that the host stack must process.

Using traditional TCP/IP, the host server must process received packets and associate them with TCP connections, which means every received packet goes through multiple data copies from system buffers to user memory locations. Because a TOE-enabled network adapter can perform all protocol processing, the adapter can use zero-copy algorithms to copy data directly from the NIC buffers into application memory locations, without intermediate copies to system buffers. In this way, TOEs greatly reduce the three main causes of TCP/IP overhead: CPU interrupt processing, memory copies, and protocol processing.

CPU interrupt processing

An application that generates a write to a remote host over a network produces a series of interrupts to segment the data into packets and process the incoming acknowledgments. Handling each interrupt creates a significant amount of context switching, a type of multitasking that redirects the focus of the host CPU from one process to another; in this case, from the current application process to the OS kernel and back again. Although interrupt-processing aggregation techniques can help reduce the overhead, they do not reduce the event processing required to send packets.
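To make the aggregation point concrete, the following toy Python model (an invented function, not an actual driver interface) counts the interrupts needed to signal a burst of packet completions when the adapter coalesces several completions into one interrupt:

```python
def interrupts_needed(packets: int, coalesce: int = 1) -> int:
    """Host interrupts required to signal `packets` completions when the
    adapter raises one interrupt per `coalesce` completions (a toy model
    of interrupt aggregation)."""
    return -(-packets // coalesce)   # ceiling division
```

Without coalescing, 45 packet completions raise 45 interrupts; coalescing 8 completions per interrupt cuts that to 6. Aggregation thus reduces interrupt count, but as the text notes, the per-packet event processing behind each completion still has to happen on the host.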
Additionally, every data transfer generates a series of data copies from the application data buffers to the system buffers, and from the system buffers to the network adapters. High-speed networks such as Gigabit Ethernet compel host CPUs to keep up with a larger number of packets. For 1500-byte packets, the host OS stack would need to process more than 83,000 packets per second, or a packet every 12 microseconds. Smaller packets put an even greater burden on the host CPU.

TOE processing can enable a dramatic reduction in network transaction load. Using TOEs, the host CPU can process an entire application I/O transaction with one interrupt. Therefore, applications working with data sizes that are multiples of network packet sizes will benefit the most from TOEs. CPU interrupt processing can be reduced from thousands of interrupts to one or two per I/O transaction.

Memory copies

Standard NICs require that data be copied from the application user space to the OS kernel. The NIC driver then can copy the data from the kernel to the on-board packet buffer. This requires multiple trips across the memory bus (see Figure 2): When packets are received from the network, the NIC copies the packets to the NIC buffers, which reside in host memory. Packets then are copied to the TCP buffer and, finally, to the application itself, for a total of three memory copies. A TOE-enabled NIC can reduce the number of buffer copies to two: the NIC copies
the packets to the TCP buffer and then to the application buffers.

Figure 2. Transmitting data across the memory bus using a standard NIC (on the send path, data moves from the application buffer to the TCP buffer in system memory and then to the NIC; on the receive path, it moves from the NIC buffer to the TCP buffer and then to the application)

A TOE-enabled NIC using Remote Direct Memory Access (RDMA) can use zero-copy algorithms to place data directly into application buffers. The capability of RDMA to place data directly eliminates intermediate memory buffering and copying, as well as the associated demands on the memory and processor resources of the host server, without requiring the addition of expensive buffer memory on the Ethernet adapter. RDMA also preserves memory-protection semantics. The RDMA over TCP/IP specification defines the interoperable protocols to support RDMA operations over standard TCP/IP networks.

Protocol processing

The traditional host OS resident stack must handle a large number of requests to process an application's 64 KB data block send request. Acknowledgments for the transmitted data also must be received and processed by the host stack. In addition, TCP is required to maintain state information for every data connection created. This state information includes data such as the current size and position of the windows for both sender and receiver. Every time a packet is received or sent, the position and size of the window change, and TCP must record these changes.

Protocol processing consumes more CPU power to receive packets than to send packets. A standard NIC must buffer received packets, and then notify the host system using interrupts. After a context switch to handle the interrupt, the host system processes the packet information so that the packets can be associated with an open TCP connection. Next, the TCP data must be correlated with the associated application, and then the TCP data must be copied from system buffers into the application memory locations.
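The distinction between the buffered receive path and zero-copy placement can be illustrated in miniature with Python's memoryview; this is only an analogy for RDMA-style placement, not adapter code:

```python
wire_data = bytearray(b"block I/O payload from the network")

# Buffered receive path: each hop materializes a fresh copy
# (NIC buffer in host memory -> TCP buffer -> application buffer).
nic_buf = bytes(wire_data)      # copy 1
tcp_buf = bytes(nic_buf)        # copy 2
app_buf = bytes(tcp_buf)        # copy 3

# Zero-copy placement: a memoryview exposes the same underlying bytes
# to the "application" without allocating or copying a new buffer.
app_view = memoryview(wire_data)
assert app_view.obj is wire_data    # same memory, no new buffer
assert bytes(app_view) == app_buf   # identical contents either way
```

The application sees the same data in both cases; the difference is the number of trips across the memory bus, which is exactly what RDMA-capable TOE adapters eliminate.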
TCP uses the checksum information in every packet that the IP layer sends to determine whether the packet is error-free. TCP also records an acknowledgment for every received packet. Each of these operations results in an interrupt call to the underlying OS. As a result, the host CPU can be saturated by frequent interrupts and protocol processing overhead.

The faster the network, the more protocol processing the CPU has to perform. In general, 1 MHz of CPU power is required to transmit 1 Mbps of data. To process 100 Mbps of data (the speed at which Fast Ethernet operates), 100 MHz of CPU computing power is required, which today's CPUs can handle without difficulty. However, bottlenecks can begin to occur when administrators introduce Gigabit Ethernet and 10 Gigabit Ethernet. At these network speeds, with so much CPU power devoted to TCP processing, relatively few cycles are available for application processing. Multi-homed hosts with multiple Gigabit Ethernet NICs compound the problem. Throughput does not scale linearly when utilizing multiple NICs in the same server because only one host TCP/IP stack processes all the traffic. In comparison, TOEs distribute network transaction processing across multiple TOE-enabled network adapters.

TOEs provide options for optimal performance or flexibility

Administrators can implement TOEs in several ways, as best suits performance and flexibility requirements. Both processor-based and chip-based methods exist. The processor-based approach provides the flexibility to add new features and use widely available components, while chip-based techniques offer excellent performance at a low cost. In addition, some TOE implementations offload processing completely while others do so partially.
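The packet-rate and CPU figures quoted earlier (83,000 packets per second on Gigabit Ethernet, roughly 1 MHz of CPU per 1 Mbps) reduce to simple arithmetic, reproduced here as a sketch:

```python
def packets_per_second(link_bps: float, frame_bytes: int) -> float:
    """Frames per second required to keep a link saturated."""
    return link_bps / (frame_bytes * 8)

# 1500-byte frames on Gigabit Ethernet: ~83,000 packets per second,
# i.e. one packet roughly every 12 microseconds.
pps = packets_per_second(1e9, 1500)
gap_us = 1_000_000 / pps

# Rule of thumb from the text: ~1 MHz of CPU per 1 Mbps of TCP traffic,
# so Fast Ethernet needs ~100 MHz and a saturated 10 Gbps link on the
# order of 10,000 MHz, far beyond what a single early-2000s CPU offered.
cpu_mhz_for_10gbe = 10_000 * 1
```

Smaller frames raise the packet rate, and the per-packet interrupt and protocol costs, proportionally, which is why minimum-size packets are the worst case for a host-resident stack.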
Processor-based versus chip-based implementations

The implementation of TOEs in a standardized manner requires two components: network adapters that can handle TCP/IP processing operations, and extensions to the TCP/IP software stack that offload specified operations to the network adapter. Together, these components let the OS move all TCP/IP traffic to specialized, TOE-enabled firmware designed into a TCP/IP-capable NIC while leaving TCP/IP control decisions with the host system. Processor-based methods also can use off-the-shelf network adapters that have a built-in processor and memory. However, processor-based methods are more expensive and still can create bottlenecks at 10 Gbps and beyond.

The second component of a standardized TOE implementation comprises TOE extensions to the TCP/IP stack, which are completely transparent to the higher-layer protocols and applications that run on top of them. Applications interact the same way with a TOE-enabled stack as they would with a standard TCP/IP stack. This transparency makes the TOE approach attractive because it requires no changes to the numerous applications and higher-level protocols that already use TCP/IP as a base for network transmission.

The chip-based implementation uses an application-specific integrated circuit (ASIC) that is designed into the network adapter. ASIC-based implementations can offer better performance than
off-the-shelf processor-based implementations because they are customized to perform the TCP offload. However, because ASICs are manufactured for a certain set of operations, adding new features may not always be possible. To offload specified operations to the network adapter, ASIC-based implementations require the same extensions to the TCP/IP software stack as processor-based implementations.

Partial versus full offloading

TOE implementations also can be differentiated by the amount of processing that is offloaded to the network adapter. In situations where TCP connections are stable and packet drops infrequent, the largest share of TCP processing is spent in data transmission and reception. Offloading just the processing related to transmission and reception is referred to as partial offloading. A partial, or data path, TOE implementation eliminates the host CPU overhead created by transmission and reception. However, the partial offloading method improves performance only in situations where TCP connections are created and held for a long time and errors and lost packets are infrequent. Partial offloading relies on the host stack to handle control (that is, connection setup) as well as exceptions. A partial TOE implementation does not handle the following:

- TCP connection setup
- Fragmented TCP segments
- Retransmission time-out
- Out-of-order segments

The host software uses dynamic and flexible algorithms to determine which connections to offload. This functionality requires an OS extension to enable hooks that bypass the normal stack and implement the offload heuristics. The system software has better information than the TOE-enabled NIC regarding the type of traffic it is handling, and thus makes offload decisions based on priority, protocol type, and level of activity.
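Such an offload heuristic might resemble the following sketch; the field names and thresholds are invented for illustration and do not come from any particular OS or adapter:

```python
def should_offload(conn: dict) -> bool:
    """Toy heuristic in the spirit of partial (data-path) offload:
    hand long-lived, busy TCP connections to the TOE adapter and keep
    short or low-activity flows on the host stack."""
    return (conn["protocol"] == "TCP"
            and conn["age_s"] >= 1.0             # established and stable
            and conn["throughput_mbps"] >= 100)  # busy enough to be worth it
```

Either way, in a partial-offload design the control traffic and exceptions listed above (connection setup, retransmission time-outs, out-of-order segments) remain on the host stack.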
In addition, the host software is responsible for preventing denial of service (DoS) attacks. When administrators discover new attacks, they should upgrade the host software as required to handle the attack for both offloaded and non-offloaded connections. TCP/IP fragmentation is a rare event on today's networks and should not occur if applications and network components are working properly. A store-and-forward NIC saves all out-of-order packets in on-chip or external RAM so that it can reorder the packets before sending the data to the application. Therefore, a partial TOE implementation should not cause performance degradation because, given that current networks are reliable, TCP operates most of the time without experiencing exceptions.

The process of offloading all the components of the TCP stack is called full offloading. With a full offload, the system is relieved not only of TCP data processing but also of connection-management tasks. Full offloading may prove more advantageous than partial offloading for TCP connections characterized by frequent errors and lost connections.

TOEs reduce end-to-end latency

A TOE-enabled TCP/IP stack and NIC can help relieve network bottlenecks and improve data-transfer performance by eliminating much of the host processing overhead that the standard TCP/IP stack incurs. By reducing the amount of time the host system spends processing network transactions, administrators can increase available bandwidth for business applications. For instance, in a scenario in which an application server is connected to a backup server, and TOE-enabled NICs are installed on both systems, the TOE approach can significantly reduce backup times. Reduction in the time spent processing packet transmissions also reduces latency, the time taken by a TCP/IP packet to travel from the source system to the destination system.
By improving end-to-end latency, TOEs can help speed response times for applications including digital media serving, NAS file serving, iSCSI, Web serving, video streaming, medical imaging, LAN-based backup and restore processes, and high-performance computing clusters.

Part two of this article, which will appear in an upcoming issue, will include benchmark testing and analysis to measure the benefits of TOEs.

Sandhya Senapathi ([email protected]) is a systems engineer with the Server OS Engineering team. Her fields of interest include operating systems, networking, and computer architecture. She has an M.S. in Computer Science from The Ohio State University.

Rich Hernandez ([email protected]) is a technologist with the Dell Product Group. He has been in the computer and data networking industry for more than 19 years. Rich has a B.S. in Electrical Engineering from the University of Houston and has pursued postgraduate studies at Colorado Technical University.

FOR MORE INFORMATION

RDMA Consortium:
Network Simulation Traffic, Paths and Impairment
Network Simulation Traffic, Paths and Impairment Summary Network simulation software and hardware appliances can emulate networks and network hardware. Wide Area Network (WAN) emulation, by simulating
High-Speed TCP Performance Characterization under Various Operating Systems
High-Speed TCP Performance Characterization under Various Operating Systems Y. Iwanaga, K. Kumazoe, D. Cavendish, M.Tsuru and Y. Oie Kyushu Institute of Technology 68-4, Kawazu, Iizuka-shi, Fukuoka, 82-852,
Chapter 9. IP Secure
Chapter 9 IP Secure 1 Network architecture is usually explained as a stack of different layers. Figure 1 explains the OSI (Open System Interconnect) model stack and IP (Internet Protocol) model stack.
TCP Servers: Offloading TCP Processing in Internet Servers. Design, Implementation, and Performance
TCP Servers: Offloading TCP Processing in Internet Servers. Design, Implementation, and Performance M. Rangarajan, A. Bohra, K. Banerjee, E.V. Carrera, R. Bianchini, L. Iftode, W. Zwaenepoel. Presented
Protocol Data Units and Encapsulation
Chapter 2: Communicating over the 51 Protocol Units and Encapsulation For application data to travel uncorrupted from one host to another, header (or control data), which contains control and addressing
Clearing the Way for VoIP
Gen2 Ventures White Paper Clearing the Way for VoIP An Alternative to Expensive WAN Upgrades Executive Overview Enterprises have traditionally maintained separate networks for their voice and data traffic.
InfiniBand Software and Protocols Enable Seamless Off-the-shelf Applications Deployment
December 2007 InfiniBand Software and Protocols Enable Seamless Off-the-shelf Deployment 1.0 Introduction InfiniBand architecture defines a high-bandwidth, low-latency clustering interconnect that is used
Pump Up Your Network Server Performance with HP- UX
Pump Up Your Network Server Performance with HP- UX Paul Comstock Network Performance Architect Hewlett-Packard 2004 Hewlett-Packard Development Company, L.P. The information contained herein is subject
Per-Flow Queuing Allot's Approach to Bandwidth Management
White Paper Per-Flow Queuing Allot's Approach to Bandwidth Management Allot Communications, July 2006. All Rights Reserved. Table of Contents Executive Overview... 3 Understanding TCP/IP... 4 What is Bandwidth
Linux NIC and iscsi Performance over 40GbE
Linux NIC and iscsi Performance over 4GbE Chelsio T8-CR vs. Intel Fortville XL71 Executive Summary This paper presents NIC and iscsi performance results comparing Chelsio s T8-CR and Intel s latest XL71
Overview. Securing TCP/IP. Introduction to TCP/IP (cont d) Introduction to TCP/IP
Overview Securing TCP/IP Chapter 6 TCP/IP Open Systems Interconnection Model Anatomy of a Packet Internet Protocol Security (IPSec) Web Security (HTTP over TLS, Secure-HTTP) Lecturer: Pei-yih Ting 1 2
10 Gigabit Ethernet: Scaling across LAN, MAN, WAN
Arasan Chip Systems Inc. White Paper 10 Gigabit Ethernet: Scaling across LAN, MAN, WAN By Dennis McCarty March 2011 Overview Ethernet is one of the few protocols that has increased its bandwidth, while
Leveraging NIC Technology to Improve Network Performance in VMware vsphere
Leveraging NIC Technology to Improve Network Performance in VMware vsphere Performance Study TECHNICAL WHITE PAPER Table of Contents Introduction... 3 Hardware Description... 3 List of Features... 4 NetQueue...
Building Enterprise-Class Storage Using 40GbE
Building Enterprise-Class Storage Using 40GbE Unified Storage Hardware Solution using T5 Executive Summary This white paper focuses on providing benchmarking results that highlight the Chelsio T5 performance
Meeting the Five Key Needs of Next-Generation Cloud Computing Networks with 10 GbE
White Paper Meeting the Five Key Needs of Next-Generation Cloud Computing Networks Cloud computing promises to bring scalable processing capacity to a wide range of applications in a cost-effective manner.
Virtualization: TCP/IP Performance Management in a Virtualized Environment Orlando Share Session 9308
Virtualization: TCP/IP Performance Management in a Virtualized Environment Orlando Share Session 9308 Laura Knapp WW Business Consultant [email protected] Applied Expert Systems, Inc. 2011 1 Background
Latency Considerations for 10GBase-T PHYs
Latency Considerations for PHYs Shimon Muller Sun Microsystems, Inc. March 16, 2004 Orlando, FL Outline Introduction Issues and non-issues PHY Latency in The Big Picture Observations Summary and Recommendations
The OSI Model: Understanding the Seven Layers of Computer Networks
Expert Reference Series of White Papers The OSI Model: Understanding the Seven Layers of Computer Networks 1-800-COURSES www.globalknowledge.com The OSI Model: Understanding the Seven Layers of Computer
Windows Server Performance Monitoring
Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly
Improving the Performance of TCP Using Window Adjustment Procedure and Bandwidth Estimation
Improving the Performance of TCP Using Window Adjustment Procedure and Bandwidth Estimation R.Navaneethakrishnan Assistant Professor (SG) Bharathiyar College of Engineering and Technology, Karaikal, India.
CS 78 Computer Networks. Internet Protocol (IP) our focus. The Network Layer. Interplay between routing and forwarding
CS 78 Computer Networks Internet Protocol (IP) Andrew T. Campbell [email protected] our focus What we will lean What s inside a router IP forwarding Internet Control Message Protocol (ICMP) IP
EITF25 Internet Techniques and Applications L5: Wide Area Networks (WAN) Stefan Höst
EITF25 Internet Techniques and Applications L5: Wide Area Networks (WAN) Stefan Höst Data communication in reality In reality, the source and destination hosts are very seldom on the same network, for
Improving Quality of Service
Improving Quality of Service Using Dell PowerConnect 6024/6024F Switches Quality of service (QoS) mechanisms classify and prioritize network traffic to improve throughput. This article explains the basic
Performance Evaluation of Linux Bridge
Performance Evaluation of Linux Bridge James T. Yu School of Computer Science, Telecommunications, and Information System (CTI) DePaul University ABSTRACT This paper studies a unique network feature, Ethernet
Fibre Channel over Ethernet in the Data Center: An Introduction
Fibre Channel over Ethernet in the Data Center: An Introduction Introduction Fibre Channel over Ethernet (FCoE) is a newly proposed standard that is being developed by INCITS T11. The FCoE protocol specification
Network Performance Optimisation and Load Balancing. Wulf Thannhaeuser
Network Performance Optimisation and Load Balancing Wulf Thannhaeuser 1 Network Performance Optimisation 2 Network Optimisation: Where? Fixed latency 4.0 µs Variable latency
A Transport Protocol for Multimedia Wireless Sensor Networks
A Transport Protocol for Multimedia Wireless Sensor Networks Duarte Meneses, António Grilo, Paulo Rogério Pereira 1 NGI'2011: A Transport Protocol for Multimedia Wireless Sensor Networks Introduction Wireless
Unified Fabric: Cisco's Innovation for Data Center Networks
. White Paper Unified Fabric: Cisco's Innovation for Data Center Networks What You Will Learn Unified Fabric supports new concepts such as IEEE Data Center Bridging enhancements that improve the robustness
Network Attached Storage. Jinfeng Yang Oct/19/2015
Network Attached Storage Jinfeng Yang Oct/19/2015 Outline Part A 1. What is the Network Attached Storage (NAS)? 2. What are the applications of NAS? 3. The benefits of NAS. 4. NAS s performance (Reliability
Intel Data Direct I/O Technology (Intel DDIO): A Primer >
Intel Data Direct I/O Technology (Intel DDIO): A Primer > Technical Brief February 2012 Revision 1.0 Legal Statements INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,
IxChariot Virtualization Performance Test Plan
WHITE PAPER IxChariot Virtualization Performance Test Plan Test Methodologies The following test plan gives a brief overview of the trend toward virtualization, and how IxChariot can be used to validate
THE IMPORTANCE OF TESTING TCP PERFORMANCE IN CARRIER ETHERNET NETWORKS
THE IMPORTANCE OF TESTING TCP PERFORMANCE IN CARRIER ETHERNET NETWORKS 159 APPLICATION NOTE Bruno Giguère, Member of Technical Staff, Transport & Datacom Business Unit, EXFO As end-users are migrating
Storage Networking Foundations Certification Workshop
Storage Networking Foundations Certification Workshop Duration: 2 Days Type: Lecture Course Description / Overview / Expected Outcome A group of students was asked recently to define a "SAN." Some replies
Microsoft Exchange Server 2003 Deployment Considerations
Microsoft Exchange Server 3 Deployment Considerations for Small and Medium Businesses A Dell PowerEdge server can provide an effective platform for Microsoft Exchange Server 3. A team of Dell engineers
Storage at a Distance; Using RoCE as a WAN Transport
Storage at a Distance; Using RoCE as a WAN Transport Paul Grun Chief Scientist, System Fabric Works, Inc. (503) 620-8757 [email protected] Why Storage at a Distance the Storage Cloud Following
High Speed I/O Server Computing with InfiniBand
High Speed I/O Server Computing with InfiniBand José Luís Gonçalves Dep. Informática, Universidade do Minho 4710-057 Braga, Portugal [email protected] Abstract: High-speed server computing heavily relies on
Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003
Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building
Overview of TCP/IP. TCP/IP and Internet
Overview of TCP/IP System Administrators and network administrators Why networking - communication Why TCP/IP Provides interoperable communications between all types of hardware and all kinds of operating
Latency on a Switched Ethernet Network
Application Note 8 Latency on a Switched Ethernet Network Introduction: This document serves to explain the sources of latency on a switched Ethernet network and describe how to calculate cumulative latency
The Case Against Jumbo Frames. Richard A Steenbergen <[email protected]> GTT Communications, Inc.
The Case Against Jumbo Frames Richard A Steenbergen GTT Communications, Inc. 1 What s This All About? What the heck is a Jumbo Frame? Technically, the IEEE 802.3 Ethernet standard defines
IP address format: Dotted decimal notation: 10000000 00001011 00000011 00011111 128.11.3.31
IP address format: 7 24 Class A 0 Network ID Host ID 14 16 Class B 1 0 Network ID Host ID 21 8 Class C 1 1 0 Network ID Host ID 28 Class D 1 1 1 0 Multicast Address Dotted decimal notation: 10000000 00001011
Auspex Support for Cisco Fast EtherChannel TM
Auspex Support for Cisco Fast EtherChannel TM Technical Report 21 Version 1.0 March 1998 Document 300-TC049, V1.0, 980310 Auspex Systems, Inc. 2300 Central Expressway Santa Clara, California 95050-2516
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family White Paper June, 2008 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL
Transport and Network Layer
Transport and Network Layer 1 Introduction Responsible for moving messages from end-to-end in a network Closely tied together TCP/IP: most commonly used protocol o Used in Internet o Compatible with a
Block based, file-based, combination. Component based, solution based
The Wide Spread Role of 10-Gigabit Ethernet in Storage This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates
Cisco Application Networking for Citrix Presentation Server
Cisco Application Networking for Citrix Presentation Server Faster Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address
Indian Institute of Technology Kharagpur. TCP/IP Part I. Prof Indranil Sengupta Computer Science and Engineering Indian Institute of Technology
Indian Institute of Technology Kharagpur TCP/IP Part I Prof Indranil Sengupta Computer Science and Engineering Indian Institute of Technology Kharagpur Lecture 3: TCP/IP Part I On completion, the student
Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study
White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This
Names & Addresses. Names & Addresses. Hop-by-Hop Packet Forwarding. Longest-Prefix-Match Forwarding. Longest-Prefix-Match Forwarding
Names & Addresses EE 122: IP Forwarding and Transport Protocols Scott Shenker http://inst.eecs.berkeley.edu/~ee122/ (Materials with thanks to Vern Paxson, Jennifer Rexford, and colleagues at UC Berkeley)
Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine
Where IT perceptions are reality Test Report OCe14000 Performance Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Document # TEST2014001 v9, October 2014 Copyright 2014 IT Brand
PART OF THE PICTURE: The TCP/IP Communications Architecture
PART OF THE PICTURE: The / Communications Architecture 1 PART OF THE PICTURE: The / Communications Architecture BY WILLIAM STALLINGS The key to the success of distributed applications is that all the terminals
CSIS 3230. CSIS 3230 Spring 2012. Networking, its all about the apps! Apps on the Edge. Application Architectures. Pure P2P Architecture
Networking, its all about the apps! CSIS 3230 Chapter 2: Layer Concepts Chapter 5.4: Link Layer Addressing Networks exist to support apps Web Social ing Multimedia Communications Email File transfer Remote
EE4367 Telecom. Switching & Transmission. Prof. Murat Torlak
Packet Switching and Computer Networks Switching As computer networks became more pervasive, more and more data and also less voice was transmitted over telephone lines. Circuit Switching The telephone
Chapter 3. TCP/IP Networks. 3.1 Internet Protocol version 4 (IPv4)
Chapter 3 TCP/IP Networks 3.1 Internet Protocol version 4 (IPv4) Internet Protocol version 4 is the fourth iteration of the Internet Protocol (IP) and it is the first version of the protocol to be widely
CSE 3461 / 5461: Computer Networking & Internet Technologies
Autumn Semester 2014 CSE 3461 / 5461: Computer Networking & Internet Technologies Instructor: Prof. Kannan Srinivasan 08/28/2014 Announcement Drop before Friday evening! k. srinivasan Presentation A 2
Improved Digital Media Delivery with Telestream HyperLaunch
WHITE PAPER Improved Digital Media Delivery with Telestream THE CHALLENGE Increasingly, Internet Protocol (IP) based networks are being used to deliver digital media. Applications include delivery of news
QoS & Traffic Management
QoS & Traffic Management Advanced Features for Managing Application Performance and Achieving End-to-End Quality of Service in Data Center and Cloud Computing Environments using Chelsio T4 Adapters Chelsio
