High Performance Bulk Data Transfer
1 High Performance Bulk Data Transfer Brian Tierney and Joe Metzger, ESnet Joint Techs, Columbus OH, July, 2010
2 Why does the network seem so slow?
3 Wizard Gap Slide from Matt Mathis, PSC
4 Today's Talk
This talk will cover:
- Some information to help you become a wizard
- Work being done so you don't have to be a wizard
Goal of this talk: help you fully optimize wide-area bulk data transfers - or help your users do this
Outline:
- TCP issues
- Bulk data transfer tools
- Router issues
- Network monitoring/troubleshooting tools
5 Time to Copy 1 Terabyte
10 Mbps network: 300 hrs (12.5 days)
100 Mbps network: 30 hrs
1 Gbps network: 3 hrs (are your disks fast enough?)
10 Gbps network: 20 minutes (need really fast disks and filesystem)
These figures assume some headroom left for other users
Compare these speeds to a USB 2.0 portable disk:
- 60 MB/sec (480 Mbps) peak
- 20 MB/sec (160 Mbps) reported online
- even lower rates reported by colleagues - i.e., many hours to load 1 Terabyte
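These transfer times are simple arithmetic; a quick command-line sanity check (a sketch using bc, with decimal units and ignoring protocol overhead):

  # seconds = bytes * 8 bits/byte / link speed in bits/sec
  echo "10^12 * 8 / 10^9" | bc     # 1 TB at 1 Gbps -> 8000 sec (~2.2 hrs; ~3 hrs with headroom)
  echo "10^12 * 8 / 10^10" | bc    # 1 TB at 10 Gbps -> 800 sec (~13 min; ~20 min with headroom)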
6 Bandwidth Requirements (table available online)
7 Throughput? Bandwidth? What?
The term "throughput" is vague
Capacity: link speed
- Narrow Link: link with the lowest capacity along a path
- Capacity of the end-to-end path = capacity of the narrow link
Utilized bandwidth: current traffic load
Available bandwidth: capacity minus utilized bandwidth
- Tight Link: link with the least available bandwidth in a path
Achievable bandwidth: includes protocol and host issues
[Figure: sample path from source to sink with 45 Mbps, 10 Mbps, 100 Mbps, and 45 Mbps links; the 10 Mbps link is the narrow link, and the shaded background traffic determines the tight link]
8 More Terminology
Latency: time to send 1 packet from the source to the destination
RTT: Round-trip time
Bandwidth * Delay Product = BDP
- The number of bytes in flight to fill the entire path
- Example: 100 Mbps path; ping shows a 75 ms RTT
- BDP = 100 Mbps * 0.075 sec = 7.5 Mbits (~940 KBytes)
LFN: Long Fat Network - a network with a large BDP
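A quick command-line check of that arithmetic (a sketch using bc; 100 Mbps and 75 ms are the example figures above):

  # BDP in bytes = bandwidth (bits/sec) * RTT (sec) / 8
  echo "100 * 10^6 * 75 / 1000 / 8" | bc    # -> 937500 bytes (~940 KBytes)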
9 How TCP works: A very short overview
Congestion window (CWND) = the number of packets the sender is allowed to send
The larger the window size, the higher the throughput
- Throughput = Window size / Round-trip Time
TCP Slow start: exponentially increase the congestion window size until a packet is lost
- this gets a rough estimate of the optimal congestion window size
[Figure: CWND vs. time - slow start (exponential increase), congestion avoidance (linear increase), packet loss triggers retransmit, timeout triggers slow start again]
10 TCP Overview
Congestion avoidance
- additive increase: starting from the rough estimate, linearly increase the congestion window size to probe for additional available bandwidth
- multiplicative decrease: cut the congestion window size aggressively if a timeout occurs
[Figure: same CWND vs. time diagram as the previous slide]
11 TCP Overview
Fast Retransmit: retransmit after 3 duplicate ACKs (got 3 additional packets without getting the one you are waiting for)
- this prevents expensive timeouts - no need to go into slow start again
At steady state, CWND oscillates around the optimal window size
With a retransmission timeout, slow start is triggered again
[Figure: same CWND vs. time diagram as the previous slides]
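On a modern Linux host you can watch this behavior directly (a sketch; 'ss -ti' is from iproute2, and the destination address is a placeholder):

  # show TCP internals (cwnd, ssthresh, rtt) for connections to a given host
  ss -ti dst 198.51.100.10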
12 TCP Tuning
13 Host Tuning: TCP
TCP tuning commonly refers to the proper configuration of the buffers that correspond to TCP windowing
Historically, TCP tuning parameters were host-global, with exceptions configured per-socket by applications
- Applications had to understand the network in detail, and know how far away clients were
- Some applications did this; most did not
Solution: auto-tune TCP connections within pre-configured limits
14 Setting the TCP buffer sizes
It is critical to use the optimal TCP send and receive socket buffer sizes for the link you are using
- Recommended size to fill the pipe: 2 x Bandwidth-Delay Product (BDP)
- Recommended size to leave some bandwidth for others: around 20% of (2 x BDP) = 0.4 * BDP
Default TCP buffer sizes can be way too small for today's high-speed networks
- Until recently, default TCP send/receive buffers were typically 64 KB
- The only way to change this was for the application to call the setsockopt() system call
- A tuned buffer to fill a CA-to-NY link is about 10 MB: 150X bigger than the default buffer size
15 Buffer Size Example
Optimal buffer size formula: buffer size = 20% * (bandwidth * RTT) (assuming your target is 20% of the narrow link)
Example:
- ping time (RTT) = 50 ms
- Narrow link = 1000 Mbps (125 MBytes/sec)
- TCP buffers should be: 0.05 sec * 125 MBytes/sec * 20% = 1.25 MBytes
16 Socket Buffer Autotuning
To solve the buffer tuning problem, based on work at LANL and PSC, the Linux OS added TCP buffer autotuning
- Sender-side TCP buffer autotuning introduced in Linux 2.4
- Receiver-side autotuning added in Linux 2.6
Most OSs now include TCP autotuning
- The TCP send buffer starts at 64 KB
- As the data transfer takes place, the buffer size is continuously readjusted up to the max autotune size
Current OS autotuning default maximum buffers:
- Linux 2.6: 256K to 4MB, depending on version
- Windows Vista: 16M
- Mac OSX: 4M
- FreeBSD 7: 256K
17 Huge number of hosts are still receive-window limited
Results from M-Lab NDT tests, Feb-Sept 2009 (over 400K tests)
- 34% of tests had no congestion events - i.e., limited by RWIN, and fixable via buffer tuning
See the white paper "Understanding broadband speed measurements" by Steve Bauer, David Clark, and William Lehr, Massachusetts Institute of Technology (Bauer_Clark_Lehr_Broadband_Speed_measurements.pdf)
[Figure 11 from that paper: scatter plots of measured download speed vs. average RTT, for tests with no congestion signals and for all tests, with lines showing the theoretical speed rwnd/average-rtt for each max-receiver-window bucket; the lesson is that the receiver window has a considerable effect on the speeds that can be achieved]
18 Autotuning Settings
Linux 2.6: add to /etc/sysctl.conf
  net.core.rmem_max = <max buffer size>
  net.core.wmem_max = <max buffer size>
  # autotuning min, default, and max number of bytes to use
  net.ipv4.tcp_rmem = <min default max>
  net.ipv4.tcp_wmem = <min default max>
FreeBSD 7.0+: add to /etc/sysctl.conf
  net.inet.tcp.sendbuf_max = <max buffer size>
  net.inet.tcp.recvbuf_max = <max buffer size>
Mac OSX: add to /etc/sysctl.conf
  kern.ipc.maxsockbuf = <max buffer size>
  net.inet.tcp.sendspace = <buffer size>
  net.inet.tcp.recvspace = <buffer size>
Windows XP: use DrTCP to modify registry settings to increase TCP buffers
Windows Vista/Windows 7: autotunes by default, 16M buffers
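For reference, a sketch of the Linux settings with concrete numbers filled in; the 16 MB maximums are an assumption based on typical fasterdata-style guidance of the era, not values recovered from the slide:

  # /etc/sysctl.conf on Linux 2.6: allow autotuning up to 16 MB buffers
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216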
19 Parallel Streams Can Help
20 Parallel Streams Issues
- Potentially unfair
- Places more load on the end hosts
- But they are necessary when you don't have root access and can't convince the sysadmin to increase the max TCP buffers
Most web browsers will open around 4 parallel streams by default
(graph from Tom Dunigan, ORNL)
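An easy way to see the effect of parallel streams on your own path (a sketch; iperf's -P flag runs parallel client streams, and the server name is a placeholder):

  # single stream vs. 4 parallel streams to the same test server
  iperf -c testhost.example.org -t 20
  iperf -c testhost.example.org -t 20 -P 4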
21 Another Issue: TCP congestion control
Path = LBL to CERN (Geneva), OC-3 (in 2000), RTT = 150 ms, average BW = 30 Mbps
22 TCP Response Function
Well-known fact that TCP Reno does not scale to high-speed networks
Average TCP congestion window = 1.2/sqrt(p) segments, where p = packet loss rate
What this means: for a TCP connection with 1500-byte packets and a 100 ms round-trip time
- filling a 10 Gbps pipe would require a congestion window of 83,333 packets
- and a packet drop rate of at most one drop every 5,000,000,000 packets
- i.e., at most one packet loss every 6000 s (1h:40m) to keep the pipe full
23 Proposed TCP Modifications
Many proposed alternate congestion control algorithms:
- BIC/CUBIC
- HTCP (Hamilton TCP)
- Scalable TCP
- Compound TCP
- And several more
24 Linux TCP Results
Note that BIC reaches max throughput MUCH faster
[Figure: throughput (up to ~600 Mbits/second) vs. time slot (5-second intervals), RTT = 67 ms, comparing Linux 2.6 with BIC TCP, Linux 2.4, and Linux 2.6 with BIC off]
25 Comparison of Various TCP Congestion Control Algorithms
26 Congestion Algorithms in Various Linux Distributions

Linux Distribution           Available Algorithms   Default Algorithm
Centos                       bic, reno              bic
Debian                       bic, reno              bic
Debian Unstable (future 6?)  cubic, reno            cubic
Fedora                       cubic, reno            cubic
Redhat 5.4                   bic, reno              bic
Ubuntu 8.0                   reno                   reno
Ubuntu 8.10                  cubic, reno            cubic
Ubuntu 9.04                  cubic, reno            cubic
Ubuntu 9.10                  cubic, reno            cubic

Note: CUBIC and BIC are both from the same group, and they recommend CUBIC because it's slightly more fair to competing traffic.
27 Selecting TCP Congestion Control in Linux
To determine the current configuration:
  sysctl -a | grep congestion
  net.ipv4.tcp_congestion_control = cubic
  net.ipv4.tcp_available_congestion_control = cubic reno
Use /etc/sysctl.conf to set it to any available congestion control algorithm. Supported options (may need to be enabled in your kernel): CUBIC, BIC, HTCP, HSTCP, STCP, LTCP, more...
- E.g., Centos 5.5 includes these: CUBIC, HSTCP, HTCP, HYBLA, STCP, VEGAS, VENO, Westwood
Use modprobe to add one:
  /sbin/modprobe tcp_htcp
  /sbin/modprobe tcp_cubic
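Putting those steps together (a sketch; assumes the tcp_htcp module is available for your kernel):

  # load the module, switch the running system, and persist the choice
  /sbin/modprobe tcp_htcp
  sysctl -w net.ipv4.tcp_congestion_control=htcp
  echo "net.ipv4.tcp_congestion_control = htcp" >> /etc/sysctl.conf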
28 Additional Host Tuning for Linux
Linux by default caches ssthresh, so one transfer with lots of congestion will throttle future transfers. To turn that off, set:
  net.ipv4.tcp_no_metrics_save = 1
For 10GE you should also increase:
  net.core.netdev_max_backlog
Warning on large MTUs: if you have configured your Linux host to use 9K MTUs but MTU discovery reduces this to 1500-byte packets, then you actually need 9/1.5 = 6 times more buffer space in order to fill the pipe. Some device drivers only allocate memory in power-of-two sizes, so you may even need 16/1.5 ≈ 11 times more buffer space!
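A sketch of those settings as sysctl.conf lines; the backlog value is an assumption (30000 is a commonly cited 10GE figure, not a number from the slide):

  # don't cache ssthresh from previous connections
  net.ipv4.tcp_no_metrics_save = 1
  # let the kernel queue more packets from the NIC before dropping (10GE)
  net.core.netdev_max_backlog = 30000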
29 NIC Tuning (see fasterdata)
Defaults are usually fine for 1GE, but 10GE often requires additional tuning
Myricom 10Gig NIC
- The Myricom NIC provides a lot of tuning knobs; for more information see the links on fasterdata
- In particular, you might benefit from increasing the interrupt coalescing timer: ethtool -C interface rx-usecs 100
- Also consider using the Throttle option
Chelsio 10Gig NIC
- TCP Segmentation Offloading (TSO) and the TCP Offload Engine (TOE) on the Chelsio NIC radically hurt performance on a WAN (they do help reduce CPU load without affecting throughput on a LAN)
- To turn off TSO: ethtool -K interface tso off
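To inspect offload settings in general before and after changing them (a sketch; 'eth2' is a placeholder interface name):

  # list current offload settings for the interface
  ethtool -k eth2
  # disable TCP segmentation offload if WAN throughput suffers
  ethtool -K eth2 tso off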
30 Other OS tuning knobs
Mac OSX: sysctl -w net.inet.tcp.win_scale_factor=8
Windows Vista/7: TCP autotuning included, 16M buffers
- Use Compound TCP: netsh interface tcp set global congestionprovider=ctcp
31 Summary So Far
To optimize TCP throughput, do the following:
- Use a newer OS that supports TCP buffer autotuning
- Increase the maximum TCP autotuning buffer size
- Use a few parallel streams if possible
- Use a modern congestion control algorithm
32 Bulk Data Transfer Tools
33 Data Transfer Tools
Parallelism is key
- It is much easier to achieve a given performance level with four parallel connections than with one connection
- Several tools offer parallel transfers
Latency interaction is critical
- Wide-area data transfers have much higher latency than LAN transfers
- Many tools and protocols assume a LAN; examples: SCP/SFTP, the HPSS mover protocol
34 Sample Data Transfer Results: Berkeley, CA to Brookhaven, NY, 1 Gbps path
Using the right tool is very important
- scp/sftp: 10 Mbps - standard Unix file copy tools; fixed 1 MB TCP window in OpenSSH, only 64 KB in OpenSSH versions < 4.7
- ftp: considerably faster (assumes TCP buffer autotuning)
- parallel-stream FTP: faster still
35 Why Not Use SCP or SFTP?
Pros:
- Most scientific systems are accessed via OpenSSH, so SCP/SFTP are installed by default
- Modern CPUs encrypt and decrypt well enough for small- to medium-scale transfers
- Credentials for system access and credentials for data transfer are the same
Cons:
- The protocol used by SCP/SFTP has a fundamental flaw that limits WAN performance
- CPU speed doesn't matter - latency matters
- Fixed-size buffers reduce performance as latency increases
- It doesn't matter how easy SCP and SFTP are to use - they simply do not perform
Verdict: do not use without performance patches
36 A Fix For scp/sftp
PSC has a patch set that fixes these problems with SSH (projects/hpn-ssh/)
- Significant performance increase
- Advantage: this helps rsync too
37 sftp
Uses the same code as scp, so don't use sftp for WAN transfers unless you have installed the HPN patch from PSC
But even with the patch, SFTP has yet another flow control mechanism
- By default, sftp limits the total number of outstanding messages to 16 x 32KB messages
- Since each datagram is a distinct message, you end up with a 512KB outstanding data limit
- You can increase both the number of outstanding messages ('-R') and the size of each message ('-B') from the command line
Sample command for a 128MB window (512 messages x 256KB each):
  sftp -R 512 -B 262144 user@host:/path/to/file outfile
38 GridFTP
GridFTP from ANL has everything needed to fill the network pipe
- Buffer tuning
- Parallel streams
Supports multiple authentication options
- anonymous
- ssh (available starting with Globus Toolkit version 4.2)
- X509
Ability to define a range of data ports
- helpful to get through firewalls
Sample use:
  globus-url-copy -p 4 sshftp://data.lbl.gov/home/mydata/myfile file:///home/mydir/myfile
Available as part of the Globus Toolkit
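The same command can combine parallel streams with an explicit TCP buffer size (a sketch; -tcp-bs is a standard globus-url-copy option, but verify it against your version, and the 4 MB value is an assumption sized per the BDP discussion earlier):

  # 4 parallel streams, 4 MB TCP buffer per stream
  globus-url-copy -p 4 -tcp-bs 4194304 sshftp://data.lbl.gov/home/mydata/myfile file:///home/mydir/myfile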
39 Newer GridFTP Features
ssh authentication option
- Not all users need or want to deal with X.509 certificates
- Solution: use SSH for the control channel; the data channel remains as is, so performance is the same
- See the GridFTP quick start guide
Optimizations for small files
- Concurrency option (-cc): establishes multiple control channel connections and transfers multiple files simultaneously
- Pipelining option: client sends the next request before the current one completes
- Cached data channel connections: reuse established data channels (Mode E) - no additional TCP or GSI connect overhead
Support for the UDT protocol
40 Globus.org: The latest iteration of Globus software
The same Globus vision, but an updated approach
- Hosted services
- Data movement initially; execution, information, and VO services to follow
Goals:
- Provide scientists with easy access to advanced computing resources - technology interactions that require no special expertise, and no software to install
- Enable users to focus on domain-specific work - manage technology failures, send notifications of interesting events, and provide users with enough information to efficiently resolve problems
41 Globus.org Manages 3rd-Party Transfers
[Figure: Globus.org drives the GridFTP control channels of two endpoints; the GridFTP data channel flows directly between them]
42 Other Tools
bbcp: supports parallel transfers and socket tuning
  bbcp -P 4 -v -w 2M myfile remotehost:filename
lftp: parallel file transfer, socket tuning, HTTP transfers, and more
  lftp -e 'set net:socket-buffer <bytes>; pget -n 4 [http|ftp]://site/path/file; quit'
axel: simple parallel accelerator for HTTP and FTP
  axel -n 4 [http|ftp]://site/file
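A concrete lftp invocation with the blanks filled in (a sketch; the 4 MB socket buffer and the URL are assumptions, with the buffer sized per the BDP discussion earlier):

  # 4 parallel connections, 4 MB socket buffer
  lftp -e 'set net:socket-buffer 4194304; pget -n 4 http://site.example.org/path/file; quit'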
43 Download Managers
There are a number of nice browser plugins that can be used to speed up web-initiated data transfers - all support parallel transfers
- Firefox add-on (all OSes): DownThemAll - this is my favorite: probably the best/simplest solution
- For Linux: aria
- For Windows: FDM
- For OSX: Speed Download ($25)
44 FTP Tool: Filezilla Open Source: includes client and server Works on Windows, OSX, Linux Features: ability to transfer multiple files in parallel
45 Phoebus
46 Other Bulk Data Transfer Issues
Firewalls
- many firewalls can't handle 1 Gbps flows - they are designed for a large number of low-bandwidth flows
- some firewalls even strip out the TCP options that allow TCP buffers > 64 KB
Disk performance
- in general you need a RAID array to get more than around 500 Mbps
47 Network troubleshooting tools
48 Iperf
iperf: nice tool for measuring end-to-end TCP/UDP performance
- Can be quite intrusive to the network in UDP mode
Server (receiver):
  iperf -s
  Server listening on TCP port 5001
  TCP window size: 85.3 KByte (default)
  [ 4] local <IP> port 5001 connected with <IP> port <port>
  [ 4] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
  [ 4] local <IP> port 5001 connected with <IP> port <port>
  [ 4] 0.0-10.0 sec 1.08 GBytes 931 Mbits/sec
Client (sender):
  iperf -c <server>
  Client connecting to <server>, TCP port 5001
  TCP window size: 129 KByte (default)
  [ 3] local <IP> port <port> connected with <IP> port 5001
  [ ID] Interval       Transfer     Bandwidth
  [ 3] <interval> sec 1.09 GBytes 913 Mbits/sec
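When you do run UDP tests, cap the offered load to limit the intrusiveness noted above (a sketch; the 300 Mbps rate and server name are placeholders):

  # UDP test at a fixed 300 Mbps offered load for 10 seconds
  iperf -c testhost.example.org -u -b 300M -t 10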
49 Iperf3
Iperf3 is a complete rewrite of iperf with the following goals:
- Easier to maintain (simple C code instead of C++)
- Built on a library that other programs can call
- Multiple language bindings via SWIG (eventually)
- Easy-to-parse output format (name=value pairs); old format still supported too
- Separate control channel
Current status: TCP only, UDP coming soon
Jon Dugan, ESnet, is the lead developer; looking for volunteers to help test (and contribute to the code!)
50 bwctl
bwctl is a wrapper for tools like iperf and nuttcp that:
- Allows only 1 test at a time
- Controls who can run tests and for how long
- Controls maximum UDP bandwidth
Sample client command (results on the next slide):
  bwctl -c hostname -t 20 -f m -i 2 -x
51 Using bwctl: Sample Results
  bwctl -c hostname -t 20 -f m -i 2 -x
  bwctl: exec_line: /usr/local/bin/iperf -c <IP> -B <IP> -f m -m -p <port> -t 20 -i 2
  [ 9] local <IP> port 5046 connected with <IP> port 5046
  [ ID] Interval       Transfer     Bandwidth
  [ 9]  0.0- 2.0 sec   423 MBytes  1773 Mbits/sec
  [ 9]  2.0- 4.0 sec   589 MBytes  2471 Mbits/sec
  [ 9]  4.0- 6.0 sec   709 MBytes  2973 Mbits/sec
  [ 9]  6.0- 8.0 sec   754 MBytes  3162 Mbits/sec
  [ 9]  8.0-10.0 sec   755 MBytes  3166 Mbits/sec
  ...
  [ 9]  0.0-20.0 sec  7191 MBytes  3016 Mbits/sec
  [ 9] MSS size 8192 bytes (MTU 8232 bytes, unknown interface)
52 NDT/NPAD: web100-based diagnostic tools
Network Diagnostic Tool (NDT):
- Good for finding local issues like duplex mismatch and small TCP buffers
- Multi-level results allow novice and expert users to view and understand the test results
- One of the tools used by M-Lab
A similar tool is NPAD, which is more targeted at network experts
Both require the server side to be running a web100 kernel
53 Netalyzr
Netalyzr is a comprehensive network measurement and debugging tool built as a Java applet (Kreibich, Weaver, Nechaev and Paxson)
- A suite of custom servers, hosted primarily on Amazon EC2
Multiple properties checked:
- NAT properties: presence, port renumbering, DNS proxies
- IP properties: blacklist membership
- Network properties: latency, bandwidth, buffering, IPv6 capability, path MTU
- Service properties: outbound port filtering and proxies on a large list of services
- HTTP properties: hidden HTTP proxies or caches; correct operation of proxies and caches
- DNS properties: latency of lookups, glue policy, DNS wildcarding, name validation, DNS MTU, DNSSEC readiness
- Host properties: browser identity, Java version, clock accuracy
54 New ESnet Diagnostic Tool: 10 Gbps IO Tester
16-disk RAID array: capable of > 10 Gbps host-to-host, disk-to-disk
Runs anonymous read-only GridFTP
Accessible to anyone on any R&E network worldwide
- 1 deployed now (west coast, USA)
- 2 more (midwest and east coast) by end of summer
Already used to debug many problems
Will soon be registered in the perfSONAR gLS
55 Part 2: Joe Metzger
56 Network troubleshooting
57 Troubleshooting Assumptions
What can you conclude from the following?
- Tests across campus are good
- Tests to the regional are OK
- Tests to the nearest backbone hub are OK to poor
- Tests to distant backbone nodes are very bad
Is the backbone congested?
58 Troubleshooting Assumptions
What can you conclude from the following?
- Tests across campus are good
- Tests to the regional are OK
- Tests to the nearest backbone hub are OK to poor
- Tests to distant backbone nodes are very bad
Throughput is inversely proportional to RTT. What can cause this?
- Packet loss close to the source due to media errors
- Packet loss close to the source due to overflowing queues
- Un-tuned TCP
59 Where are common problems?
[Figure: path from source campus (S) through regional and NREN/backbone networks to destination campus (D)]
- Congested or faulty links between domains
- Low-level loss and shallow queues inside domains with small RTT
- Firewall at the campus edge
60 Local testing will not find all problems
[Figure: source campus (S) to destination campus (D) across two regionals and the R&E backbone]
- Performance is good when RTT is < 20 ms
- Performance is poor when RTT exceeds 20 ms
61 Soft Network Failures
Soft failures are where basic connectivity functions, but high performance is not possible
TCP was intentionally designed to hide all transmission errors from the user:
- "As long as the TCPs continue to function properly and the internet system does not become completely partitioned, no transmission errors will affect the users." (from IEN 129, RFC 716)
Some soft failures only affect high-bandwidth, long-RTT flows
Hard failures are easy to detect & fix; soft failures can lie hidden for years!
One network problem can often mask others
62 Common Soft Failures
Small queue tail drop
- Switches not able to handle the long packet trains prevalent in long-RTT sessions and local cross traffic at the same time
Un-intentional rate limiting
- Processor-based switching on routers due to faults, ACLs, or misconfiguration
- Security devices - e.g.: 10X improvement by turning off a Cisco Reflexive ACL
Random packet loss
- Bad fibers or connectors
- Low light levels due to amps/interfaces failing
- Duplex mismatch
63 Building a Global Network Diagnostic Framework
64 Addressing the Problem
Deploy performance testing and monitoring systems throughout the network
Give users access through a standard mechanism: perfSONAR
Sidebar on perfSONAR terminology:
- perfSONAR: standardized schema, protocols, APIs
- perfSONAR-MDM: GÉANT implementation, primarily Java
- perfSONAR-PS: Internet2/ESnet/more implementation, primarily Perl
- pS-Toolkit: easy-to-install packaging of perfSONAR-PS
65 Components of a Global Diagnostic Service Globally accessible measurement services Support for both active probes and passive results (SNMP) In particular: throughput testing servers - Recommended tool for this is bwctl Includes controls to prevent DDOS attacks Services must be registered in a globally accessible lookup service Open to the entire R&E network community Ideally light-weight authentication and authorization
66 Addressing the Problem: perfsonar perfsonar - an open, web-services-based framework for: running network tests collecting and publishing measurement results ESnet is: Deploying the framework across the science community Encouraging people to deploy known good measurement points near domain boundaries - known good = hosts that are well configured, enough memory and CPU to drive the network, proper TCP tuning, clean path, etc. Using the framework to find and correct soft network failures.
67 perfSONAR Services
Lookup Service
- gLS: global lookup service, used to find services
- hLS: home lookup service, for registering local perfSONAR metadata
Measurement Archives (data publication)
- SNMP MA: interface data
- pSB MA: scheduled bandwidth and latency data
pS-Toolkit includes these measurement tools:
- BWCTL: network throughput
- OWAMP: network loss, delay, and jitter
- PingER: network loss and delay
pS-Toolkit includes these troubleshooting tools:
- NDT (TCP analysis, duplex mismatch, etc.)
- NPAD (TCP analysis, router queuing analysis, etc.)
68 ESnet PerfSONAR Deployment
69 ESnet Deployment Activities
ESnet has deployed OWAMP and BWCTL servers next to all backbone routers and at all 10Gb-connected sites
- 27 locations deployed, many more planned
A full list of active services, and instructions on using them for network troubleshooting, are available online
These services have proven extremely useful in helping debug a number of problems
70 Global PerfSONAR-PS Deployments
Based on global lookup service (gLS) registration, May 2010:
- currently deployed in over 100 locations
- ~100 bwctl and owamp servers
- ~60 active probe measurement archives
- ~25 SNMP measurement archives
- Countries include: USA, Australia, Hong Kong, Argentina, Brazil, Uruguay, Guatemala, Japan, China, Canada, Netherlands, Switzerland
US ATLAS deployment
- Monitoring all Tier 1 to Tier 2 connections
A current list of public services is available online
71 Sample Results
72 Sample Results
- Heavily used path: probe traffic is scavenger service
- Asymmetric results: different TCP stacks?
73 Sample Results: Finding/Fixing Soft Failures
- Rebooted router with full route table
- Gradual failure of optical line card
74 Effective perfSONAR Deployment Strategies
75 Levels of perfSONAR Deployment
ESnet classifies perfSONAR deployments into 3 "levels":
- Level 1: run a bwctl server that is registered in the perfSONAR Lookup Service. This allows remote sites and ESnet engineers to run tests to your site.
- Level 2: configure perfSONAR-BUOY to run regularly scheduled tests to/from your host. This allows you to establish a performance baseline, and to determine when performance changes.
- Level 3: full set of perfSONAR services deployed (everything on the pS-Toolkit bootable CD)
76 Deploying perfSONAR-PS Tools in Under 30 Minutes
There are two easy ways to deploy a perfSONAR-PS host
Level 1 perfSONAR-PS install:
- Build a Linux machine as you normally would (configure TCP properly! see fasterdata)
- Go through the Level 1 HOWTO
- Simple, fewer features, runs on a standard Linux build
Use the perfSONAR-PS Performance Toolkit bootable CD:
- Most of the configuration is done via a web GUI
- More features (level 2 or 3), runs from CD
- Simple installation package coming soon
77 Measurement Recommendations to DOE Sites
Deploy perfSONAR-PS based test tools
- At the site border - use to rule out WAN issues
- Near important end systems - use to rule out LAN issues
Use them to:
- Find & fix current local problems
- Identify when they re-occur
- Set user expectations by quantifying your network services
78 Sample Site Deployment
79 Developing a Measurement Plan
What are you going to measure?
- Achievable bandwidth: to regional destinations and important collaborators, a few times per day to each destination - 20-second tests within a region, longer across the Atlantic or Pacific
- Loss/availability/latency - OWAMP: ~10 collaborators over diverse paths; PingER: use to monitor paths to collaborators who don't support owamp
- Interface utilization & errors
What are you going to do with the results?
- NAGIOS alerts
- Reports to the user community
- Post to a website
80 Importance of Regular Testing
You can't wait for users to report problems and then fix them (soft failures can go unreported for years!)
Things just break sometimes
- Failing optics
- Somebody messed around in a patch panel and kinked a fiber
- Hardware goes bad
Problems that get fixed have a way of coming back
- System defaults come back after hardware/software upgrades
- New employees may not know why the previous employee set things up a certain way, and back out fixes
Important to continually collect, archive, and alert on active throughput test results
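perfSONAR-BUOY is the proper way to schedule such tests, but even a cron entry gives you a continuous baseline (a sketch; the host name, schedule, and log path are placeholders):

  # run a 20-second bwctl throughput test every 6 hours and append the results to a log
  0 */6 * * * /usr/local/bin/bwctl -c testhost.example.org -t 20 -f m >> /var/log/bwctl-tests.log 2>&1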
81 Router Tuning
82 Network Architecture Issues
Most LANs are not purpose-built for science traffic - they carry many types of traffic
- Desktop machines, laptops, wireless
- VOIP
- HVAC control systems
- Financial systems, HR
- Some science data coming from someplace
Bulk data transfer traffic is typically very different from enterprise traffic, e.g. VOIP vs. bulk data:
- VOIP: deep queuing is bad, drops are OK
- Bulk data: deep queuing is OK, drops are bad
83 LAN Design Recommendations
1. Separate desktop/enterprise traffic from bulk data transfer traffic
- Security requirements & firewall requirements differ
- Many enterprise firewalls do not do a good job with bulk data transfer streams
2. Separate local-area traffic from wide-area traffic
- Most TCP stacks do not share bandwidth evenly between short- and long-RTT traffic
- Short-RTT flows cause congestion & then recover quickly; long-RTT flows don't recover as quickly (or at all, sometimes)
3. Put perfSONAR measurement points on the border & near important data sources & sinks
- Support measurement both to the campus border (from both inside & outside)
- Support measurements all the way to the important resources
84 Science vs. Enterprise Networks
Science and enterprise network requirements are in conflict
One possible remedy:
- Split the network at the border into an enterprise network and a science network
- Put the enterprise security perimeter at the edge of the enterprise network
- Use security tools compatible with high-speed science flows on the science network
Another option:
- Build a separate enclave at the external border supporting dedicated systems designed & tuned for wide-area data transfer
85 Separate Enterprise and Science Networks
86 Internal / External Traffic Separation
87 Router and Switch Configuration
Buffer/queue management
- TCP traffic is bursty
- Coincident bursts destined for a common egress port cause momentary oversubscription of the output queue
- Traffic entering via a 10G interface and leaving via a 1G interface can cause oversubscription of the output queue
- Momentary oversubscription of output queues causes packet loss
Default configuration of many devices is inadequate
Example Cisco commands:
  sho int sum     (check for output queue drops)
  hold-queue 4096 out     (change from the default 40-packet output queue depth)
88 Output Queue Oversubscription
89 Router and Switch Issues
Need to have adequate buffering to handle TCP bursts
- Long fat pipes mean large TCP windows, which mean large wire-speed bursts
- Changes in speed (e.g. 10Gbps to 1Gbps)
Many devices do not have sufficient output queue resources to handle bulk data flows
- Need to understand device capabilities
- When purchasing new equipment, require adequate buffering
Many defaults assume a different traffic profile (e.g. millions of web browsers)
- Drop traffic at the first sign of oversubscription
- Makes TCP back off because of packet loss
- Protects flows such as VOIP and videoconferencing (remember the enterprise traffic?)
Non-default configuration is necessary
90 Network Architecture Summary
Build a network to support bulk data transfers with data transfer in mind - don't just plug it in anyplace
- Avoid traversing enterprise infrastructure if you can
- Connect your bulk transfer resources as close to the border router as you can
- Configure routers and switches for adequate buffering
- Watch drop counters (e.g. sho int sum or sho int queue)
- Watch error counters
If you have to, collocate your data server near the border router
- On-site transfers to your server in another building will usually be high performance due to low latency
- WAN transfers bypass the enterprise infrastructure
- Might be better for network administrators as well (no need to upgrade all the switches in the path to accommodate science traffic)
91 Jumbo Frames
Standard Ethernet packet is 1500 bytes (aka: the MTU)
Most gigabit Ethernet hardware now supports jumbo frames (jumbo packets) up to 9 KBytes
- Helps performance by reducing the number of host interrupts
ESnet, Internet2, and GEANT are all 9KB jumbo-clean
- But many sites have not yet enabled jumbo frames
- Some jumbo frame implementations do not interoperate
Ethernet speeds have increased 3000x since the 1500-byte frame was defined
- Computers now have to work 3000x harder to fill the network
To see if your end-to-end path is jumbo clean:
  ping -M do -s 8972 host     (Linux)
  ping -D -s 8972 host     (FreeBSD)
Header math: 20 bytes IP + 8 bytes ICMP + 8972 bytes payload = 9000
92 Putting it all Together
93 Putting It All Together
Build dedicated hosts for WAN data transfers
- Use the right tools: GridFTP, bbcp, HPN-SSH
- Make sure TCP parameters are configured
- Make sure the backend or disk subsystems are properly tuned
- Put them in the right place in your network
Deploy at least a Level 1 perfSONAR host
- Test and measurement are critical
- Even if you're fabulously lucky and don't need it today, you'll need it tomorrow
Don't stop debugging until everything works end-to-end over long-RTT paths!
94 Conclusions
The wizard gap is starting to close (slowly) - if the max TCP autotuning buffers are increased
Tuning TCP is not easy! No single solution fits all situations
- need to be careful to set TCP buffers properly
- sometimes parallel streams help throughput, sometimes they hurt
Autotuning helps a lot
Lots of options for bulk data tools
- Choose the one that fills your requirements
- Don't use unpatched scp!
95 More Information
