Network Performance - Theory


Network Performance - Theory

This document is a result of work by the perfSONAR Project (http://www.perfsonar.net) and is licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).

Event
Presenter, Organization, Email
Date

Introduction

perfSONAR is a tool to measure end-to-end network performance. What does this imply?
- End-to-end: the entire network path between perfSONAR hosts, including applications, software, operating system, and host hardware
- At each hop: transitions between OSI layers in routing/switching devices (e.g. Transport to Network Layer), buffering, processing speed
- Flow through security devices
- No easy way to separate out the individual components; by default, the number the tool gives has them all combined

Initial End-to-End Network (diagram)

Initial End-to-End Network

Src host delay:
- Application writing to the OS (kernel)
- Kernel writing via memory to hardware
- NIC writing to the network

Src LAN:
- Buffering on ingress interface queues
- Processing data for the destination interface
- Egress interface queuing
- Transmission/serialization onto the wire

Dst host delay:
- NIC receiving data
- Kernel allocating space, sending to the application
- Application reading/acting on the received data

Dst LAN:
- Buffering on ingress interface queues
- Processing data for the destination interface
- Egress interface queuing
- Transmission/serialization onto the wire

WAN:
- Propagation delay for long spans
- Ingress queuing/processing/egress queuing/serialization at each hop
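To make two of these components concrete, here is a small Python sketch (not part of the original slides) that estimates serialization delay and propagation delay using assumed numbers: a 1500-byte packet, a 10 Gbit/s interface, and a 4000 km span.

# Rough per-hop serialization delay and long-haul propagation delay.
# Packet size, link rate, and span length are assumed for illustration.
PACKET_BITS = 1500 * 8           # a full-size Ethernet frame payload, in bits
LINK_RATE_BPS = 10e9             # 10 Gbit/s interface
SPAN_METERS = 4_000_000          # 4000 km WAN span
SIGNAL_SPEED = 2.0e8             # roughly 2/3 of c: signal speed in fiber (m/s)

serialization_s = PACKET_BITS / LINK_RATE_BPS   # time to clock the packet onto the wire
propagation_s = SPAN_METERS / SIGNAL_SPEED      # time for the signal to cross the span

print(f"serialization: {serialization_s * 1e6:.2f} microseconds per hop")
print(f"propagation:   {propagation_s * 1e3:.1f} ms one way")

Under these assumptions propagation (about 20 ms one way) dwarfs per-packet serialization (about 1.2 microseconds), which is why long spans dominate end-to-end latency while queuing and serialization at each hop mostly show up as jitter.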

OSI Stack Reminder

The demarcation between each layer has an API (e.g. the narrow waist of an hourglass). Some layers are better defined than others:
- The jobs of the presentation and session layers may be handled within the application itself
- The operating system handles TCP and IP, although these are separate libraries
- Network/Data Link functions occur within network devices (routers, switches)

Host Breakout

Most applications are written in user space, a special section of the OS that is jailed off from kernel space. Requests to use functions like the network are made via system calls through an API (e.g. "open a socket so I can communicate with a remote host").

The TCP/IP libraries live within the kernel; they receive the request and take care of the heavy lifting of converting the data from the application (e.g. a large chunk of memory) into individual packets for the network.

The NIC will then encapsulate these into the link layer protocol (e.g. Ethernet frames) and send them onto the wire for the next hop to deal with.
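As a minimal sketch of that flow (not from the original slides; the address and port below are placeholders), this is all the sending application sees: it opens a socket via a system call and hands the kernel a chunk of memory.

# Python sketch of the sending application's view of the network.
import socket

DEST = ("192.0.2.10", 5001)        # hypothetical remote host and port
payload = b"x" * (1024 * 1024)     # 1 MiB of application data in user-space memory

# socket()/connect()/send() are system calls; from here the kernel's TCP/IP
# stack does the heavy lifting: segmenting the buffer into packets, pacing,
# retransmitting, and handing frames to the NIC.
with socket.create_connection(DEST) as s:
    s.sendall(payload)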

Host Breakout

The receive side works similarly, just in reverse:
- Frames come off the network and into the NIC. The onboard processor extracts packets and passes them to the kernel.
- The kernel maps the packets to the application that should be dealing with them.
- The application receives the data via the API.

Note that the TCP/IP libraries manage things like data control. The application only sees a socket; it knows that it will send in data and that the data will make it to the other side. It is the job of the library to ensure reliable delivery.
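The matching receive side, again as a hedged sketch (the port is a placeholder): the kernel reassembles packets into an in-order byte stream, and the application simply reads from the socket.

# Python sketch of the receiving application's view of the network.
import socket

with socket.create_server(("", 5001)) as srv:      # listen on the hypothetical port
    conn, addr = srv.accept()                      # kernel maps arriving packets to this socket
    with conn:
        total = 0
        while True:
            chunk = conn.recv(65536)               # read whatever the kernel has buffered
            if not chunk:                          # empty read: the sender closed the stream
                break
            total += len(chunk)
        print(f"received {total} bytes from {addr[0]}")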

Network Device Breakout (diagram)

Network Device Breakout

Data arrives from multiple sources, and buffers have a finite amount of memory:
- Some devices have this per interface
- Others may have access to a shared memory region with other interfaces

The processing engine will:
- Extract each packet/frame from the queues
- Pull off header information to see where the destination should be
- Move the packet/frame to the correct output queue

Additional delay is possible as the queues physically write the packet to the transport medium (e.g. optical interface, copper interface).
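A toy model of one egress queue (not from the slides; the buffer size and traffic rates are made up) shows how finite buffers turn bursts from multiple sources into queuing delay and, once the buffer fills, tail drops.

# Toy single-queue model: finite buffer, tail drop when full.
from collections import deque

BUFFER_PKTS = 100                  # assumed buffer depth, in packets
queue = deque()
dropped = 0

def enqueue(pkt):
    """A packet arrives from some ingress interface."""
    global dropped
    if len(queue) >= BUFFER_PKTS:
        dropped += 1               # buffer full: tail drop
    else:
        queue.append(pkt)

def drain(n):
    """Serialize up to n packets onto the output medium this interval."""
    for _ in range(min(n, len(queue))):
        queue.popleft()

# Three sources each offer 50 packets per interval into a link that can
# serialize 120 packets per interval: the buffer overflows and packets are lost.
for _ in range(10):
    for pkt in range(3 * 50):
        enqueue(pkt)
    drain(120)

print(f"still queued: {len(queue)}, dropped: {dropped}")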

Network Device Breakout Delays (diagram)

Network Devices & OSI

Not every device will care about every layer:
- Hosts understand them all via various libraries
- Network devices only know up to a point:
  - Routers know up to the Network Layer. They make the choice of sending to the next hop based on Network Layer headers, e.g. IP addresses.
  - Switches know up to the Link Layer. They make the choice of sending to the next hop based on Link Layer headers, e.g. MAC addresses.

Each hop has the hardware/software to pull off the encapsulated data to find what it needs.
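The sketch below (not part of the slides; the frame is hand-built for the example) shows what "pulling off the encapsulated data" means in practice: a switch only needs the Ethernet header, while a router digs one layer deeper for the IP destination address.

# De-encapsulating an Ethernet frame far enough for a switching or routing decision.
import struct

def forwarding_keys(frame: bytes):
    # Link layer: destination MAC (6 bytes), source MAC (6), EtherType (2)
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
    keys = {"switch_forwards_on": dst_mac.hex(":")}
    if ethertype == 0x0800:                     # IPv4 is encapsulated inside
        ip = frame[14:]
        # Network layer: destination IPv4 address sits at bytes 16-19 of the IP header
        keys["router_forwards_on"] = ".".join(str(b) for b in ip[16:20])
    return keys

# A made-up frame: all-zero MACs, EtherType 0x0800, then a minimal 20-byte IPv4 header
ip_header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                        bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + ip_header
print(forwarding_keys(frame))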

End-to-End

A network user interfaces with the network via a tool (data movement application, portal). They only get a single piece of feedback: how long the interaction takes.

In reality it's a complex series of moves to get the data end to end, with limited visibility by default:
- Delays on the source host
- Delays in the source LAN
- Delays in the WAN
- Delays in the destination LAN
- Delays on the destination host

End-to-End

The only way to get visibility is to rely on instrumentation at the various layers:
- Host-level monitoring of key components (CPU, memory, network)
- LAN/WAN-level monitoring of individual network devices (utilization, drops/discards, errors)
- End-to-end simulations

The latter is tricky: we want to simulate what a user would see by having our own (well-tuned) application tell us how it can do across the common network substrate.

Dereferencing Individual Components

Host performance:
- Software tools: Ganglia, host SNMP + Cacti/MRTG

Network device performance:
- SNMP/TL1 passive polling (e.g. interface counters, internal behavior)

Software performance:
- ??? This depends heavily on how well the software (e.g. operating system, application) is instrumented.

End-to-end:
- perfSONAR active tools (iperf, owamp, etc.)
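As a concrete example of the passive-polling idea (not from the slides; the counter values, interval, and link speed are made up), two readings of an interface octet counter, such as SNMP's ifHCInOctets, are enough to derive a utilization figure.

# Turn two samples of an interface octet counter into link utilization.
COUNTER_WRAP = 2 ** 64             # assuming a 64-bit high-capacity counter

def utilization(octets_t0, octets_t1, interval_s, link_bps):
    delta = (octets_t1 - octets_t0) % COUNTER_WRAP   # tolerate a counter wrap between polls
    bps = delta * 8 / interval_s                     # octets -> bits per second
    return bps, 100.0 * bps / link_bps

bps, pct = utilization(octets_t0=1_200_000_000, octets_t1=1_950_000_000,
                       interval_s=30, link_bps=10e9)
print(f"{bps / 1e6:.0f} Mbit/s ({pct:.1f}% of a 10 Gbit/s link)")   # 200 Mbit/s, 2.0%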

Takeaways

Since we want network performance, we want to remove the host hardware, operating system, and applications from the equation as much as possible. Things that we can do on our own, or that we get for free by using perfSONAR:
- Host hardware: choosing hardware matters. There needs to be predictable interaction between system components (NIC, motherboard, memory, processors).
- Operating system: perfSONAR features a tuned version of CentOS. This version eliminates extra software and has been modified to allow for high-performance networking.
- Applications: perfSONAR applications are designed to make minimal system calls and do not involve the disk subsystem. They are designed to be as low-impact on the host as possible, so the performance they report reflects realistic network performance.

Let's Talk About iperf

Start with a definition: network throughput is the rate of successful message delivery over a communication channel. In easier terms: how much data can I shovel into the network in some given amount of time?

Things it includes: the operating system, the host hardware, and the entire network path.

What does this tell us?
- It is the opposite of utilization (i.e. it is how much we can get at a given point in time, minus what is already utilized)
- Utilization and throughput added together are capacity
- Tools that measure throughput are a simulation of a real-world use case (e.g. how well could bulk data movement perform)

What iperf Tells Us

Let's start by pinning down "throughput", which is a vague term:
- Capacity: link speed
  - Narrow link: the link with the lowest capacity along a path
  - Capacity of the end-to-end path = capacity of the narrow link
- Utilized bandwidth: current traffic load
- Available bandwidth: capacity minus utilized bandwidth
  - Tight link: the link with the least available bandwidth in a path
- Achievable bandwidth: includes protocol and host issues (e.g. BDP!)

All of this is memory to memory, i.e. we are not involving a spinning disk (more later).

(Diagram: a source-to-sink path with links of 45 Mbps, 10 Mbps, 100 Mbps, and 45 Mbps; shaded portions show background traffic; the narrow link and the tight link are labeled.)
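Applying those definitions to a four-link path like the one in the diagram (the capacities match the figure; the utilization numbers and RTT below are assumed for illustration), this sketch picks out the narrow link, the tight link, and the bandwidth-delay product that caps achievable bandwidth.

# Narrow link vs. tight link, plus the TCP window (BDP) needed to fill the path.
links = [
    {"name": "link1", "capacity_mbps": 45,  "utilized_mbps": 20},
    {"name": "link2", "capacity_mbps": 10,  "utilized_mbps": 1},
    {"name": "link3", "capacity_mbps": 100, "utilized_mbps": 95},
    {"name": "link4", "capacity_mbps": 45,  "utilized_mbps": 10},
]
for link in links:
    link["available_mbps"] = link["capacity_mbps"] - link["utilized_mbps"]

narrow = min(links, key=lambda l: l["capacity_mbps"])    # lowest capacity on the path
tight = min(links, key=lambda l: l["available_mbps"])    # least available bandwidth

rtt_s = 0.05                                             # assumed 50 ms round-trip time
bdp_bytes = tight["available_mbps"] * 1e6 * rtt_s / 8    # window needed to keep the pipe full

print(f"narrow link: {narrow['name']} ({narrow['capacity_mbps']} Mbit/s capacity)")
print(f"tight link:  {tight['name']} ({tight['available_mbps']} Mbit/s available)")
print(f"BDP: about {bdp_bytes / 1024:.0f} KiB of in-flight data at {rtt_s * 1000:.0f} ms RTT")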

BWCTL Example (iperf3)

[zurawski@wash-pt1 ~]$ bwctl -T iperf3 -f m -t 10 -i 2 -c sunn-pt1.es.net
bwctl: run_tool: tester: iperf3
bwctl: run_tool: receiver: 198.129.254.58
bwctl: run_tool: sender: 198.124.238.34
bwctl: start_tool: 3598657653.219168
Test initialized
Running client
Connecting to host 198.129.254.58, port 5001
[ 17] local 198.124.238.34 port 34277 connected to 198.129.254.58 port 5001
[ ID] Interval          Transfer     Bandwidth       Retransmits
[ 17] 0.00-2.00   sec   430 MBytes   1.80 Gbits/sec  2
[ 17] 2.00-4.00   sec   680 MBytes   2.85 Gbits/sec  0
[ 17] 4.00-6.00   sec   669 MBytes   2.80 Gbits/sec  0
[ 17] 6.00-8.00   sec   670 MBytes   2.81 Gbits/sec  0
[ 17] 8.00-10.00  sec   680 MBytes   2.85 Gbits/sec  0
[ ID] Interval          Transfer     Bandwidth       Retransmits
Sent
[ 17] 0.00-10.00  sec   3.06 GBytes  2.62 Gbits/sec  2
Received
[ 17] 0.00-10.00  sec   3.06 GBytes  2.63 Gbits/sec
iperf Done.
bwctl: stop_tool: 3598657664.995604
SENDER END

N.B. This is what perfSONAR graphs: the average of the complete test.
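As a quick sanity check on the summary lines above (not part of the original slides), the reported average can be reproduced from the transferred volume; note that iperf's byte units are binary while its bit-rate units are decimal.

# Reproduce the reported average: 3.06 GBytes moved in roughly 10 seconds.
gbytes = 3.06                                        # iperf "GBytes" are binary (2**30 bytes)
seconds = 10.0
gbits_per_sec = gbytes * 2**30 * 8 / seconds / 1e9   # iperf "Gbits/sec" are decimal
print(f"{gbits_per_sec:.2f} Gbits/sec")              # ~2.63, matching the Received line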

Summary

We have established that our tools are designed to measure the network. For better or for worse, the network is also our host hardware, operating system, and application. To get the most accurate measurement, we need:
- Hardware that performs well
- Operating systems that perform well
- Applications that perform well
