Troubleshooting Network Performance Issues with Active Monitoring
1 Troubleshooting Network Performance Issues with Active Monitoring. NANOG 64, Brian Tierney, ESnet, June 2, 2015. This document is a result of work by the perfSONAR Project (http://www.perfsonar.net) and is licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).
2 My Background: Who Am I? Lawrence Berkeley National Lab, ESnet, Big Science Data.
3 Lawrence Berkeley National Laboratory
4 Energy Sciences Network
- Connects Department of Energy National Laboratories to universities and research institutions around the world.
- Many sites with 100G connections to ESnet today: Berkeley, Livermore, Stanford, Fermi, Brookhaven, Oak Ridge, Argonne.
5 ESnet / DOE National Lab Network Profile
- Small-ish numbers of very large flows over very long distances: between California, Illinois, New York, Tennessee, Switzerland.
- High-speed access links: 100G sites connected to a 100G core.
- Nx10G hosts (future Nx40G hosts) dedicated to data transfer: GridFTP / Globus Online / Parallel FTP.
- LHC detectors to data centers around the world (future 180Gbps).
- Electron microscopes to supercomputers (20k-100k FPS per camera).
6 Introduction & Motivation. NANOG 64, Brian Tierney, ESnet, June 2, 2015. This document is a result of work by the perfSONAR Project (http://www.perfsonar.net) and is licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).
7 perfSONAR Goals
- Originally developed to help troubleshoot large science data transfers, e.g. the LHC at CERN, large genomic data sets, etc.
- Useful for any network doing large file transfers, e.g. into Amazon EC2, large HD movies (Hollywood post-production), and many new big data applications.
8 What is perfSONAR?
- perfSONAR is a tool to: set (and hopefully raise) network performance expectations; find network problems ("soft failures"); and help fix these problems, all in multi-domain environments.
- These problems are all harder when multiple networks are involved.
- perfSONAR provides a standard way to publish active and passive monitoring data.
- This data is interesting to network researchers as well as network operators.
9 Problem Statement
- In practice, performance issues are prevalent and distributed.
- When a network is underperforming or errors occur, it is difficult to identify the source, as problems can happen anywhere, in any domain.
- Local-area network testing is not sufficient, as errors can occur between networks.
10 Where Are The Problems? (Diagram: source campus S, regional/NREN networks, backbone, destination campus D.)
- Congested or faulty links between domains.
- Latency-dependent problems inside domains with small RTT.
- Congested intra-campus links.
11 Local Testing Will Not Find Everything. (Diagram: source campus S, regionals, R&E backbone, destination campus D, with a small-buffered switch on the path.)
- Performance is good when RTT is < ~10 ms.
- Performance is poor when RTT exceeds ~10 ms.
12 A small amount of packet loss makes a huge difference in TCP performance. (Chart: throughput vs. distance — local/LAN, metro area, regional, continental, international — comparing measured TCP Reno, measured HTCP, theoretical TCP Reno, and measured no-loss curves.) With loss, high performance beyond metro distances is essentially impossible.
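This effect is easy to reproduce in a lab. A minimal sketch, assuming a Linux sender with interface eth0 and an iperf3 server already listening on a placeholder receiver; netem emulates the long path and the small loss rate:
> tc qdisc add dev eth0 root netem delay 50ms loss 0.1%   # emulate a ~continental RTT plus light loss (run as root)
> iperf3 -c receiver.example.net -t 30                    # observe the collapse in TCP throughput
> tc qdisc del dev eth0 root                              # remove the emulation when done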
13 Soft Network Failures
- Soft failures are where basic connectivity functions, but high performance is not possible.
- TCP was intentionally designed to hide all transmission errors from the user: "As long as the TCPs continue to function properly and the internet system does not become completely partitioned, no transmission errors will affect the users." (From IEN 129 / RFC 716.)
- Some soft failures only affect high-bandwidth, long-RTT flows.
- Hard failures are easy to detect and fix; soft failures can lie hidden for years!
- One network problem can often mask others.
- Hardware counters can lie!
14 Hard vs. Soft Failures
- Hard failures are the kind of problems every organization understands: a fiber cut, a power failure that takes down routers, hardware that ceases to function.
- Classic monitoring systems are good at alerting on hard failures: the NOC sees something turn red on their screen, and engineers are paged by the monitoring systems.
- Soft failures are different and often go undetected: basic connectivity (ping, traceroute, web pages, ...) works, but performance is just poor.
- How much should we care about soft failures?
16 Causes of Packet Loss
- Network congestion: easy to confirm via SNMP, easy to fix with $$. This is not a soft failure, just a network capacity issue; people often assume congestion is the problem when in fact it is not.
- Under-buffered switch dropping packets: hard to confirm.
- Under-powered firewall dropping packets: hard to confirm.
- Dirty fibers or connectors, failing optics/light levels: sometimes easy to confirm by looking at error counters in the routers (see the example commands below).
- Overloaded or slow receive host dropping packets: easy to confirm by looking at CPU load on the host.
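On a Linux end host the relevant counters can be read directly; router CLIs vary by vendor, so the following only illustrates the idea (the interface name eth0 is a placeholder):
> ip -s link show eth0                    # RX/TX bytes, errors, and drops per interface
> ethtool -S eth0 | egrep -i 'err|drop'   # per-NIC hardware counters; names vary by driver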
17 Sample Soft Failure: Failing Optics. (Chart, Gb/s over one month: normal performance, then degrading performance, then repair.)
18 Sample Soft Failure: Under-powered Firewall
- Inside the firewall: one direction is severely impacted by the firewall; not useful for science data.
- Outside the firewall: good performance in both directions.
19 Sample Soft Failure: Host Tuning
20 Soft Failure: Under-buffered Switches. (Table: average TCP results for various switches — Brocade MLXe, Arista, and Cisco models, including a Cisco VOQ design — with buffers per 10G egress port ranging from roughly 1MB to 90MB. Test conditions: 2x parallel TCP streams, 50ms simulated RTT, 2Gbps UDP background traffic.)
21 What is perfSONAR?
- perfSONAR is a tool to: set network performance expectations; find network problems ("soft failures"); and help fix these problems, all in multi-domain environments.
- These problems are all harder when multiple networks are involved.
- perfSONAR provides a standard way to publish active and passive monitoring data.
- perfSONAR = measurement middleware: you can't fix what you can't find, and you can't find what you can't see. perfSONAR lets you see.
22 perfSONAR History
- perfSONAR traces its origin to the Internet2 End-to-End Performance Initiative from the year 2000. What has changed since 2000?
- The Good News: TCP is much less fragile; CUBIC is the default congestion-control algorithm, and autotuning and larger TCP buffers are everywhere. Reliable parallel transfers via tools like Globus Online. High-performance UDP-based commercial tools like Aspera. There is more good news in the latest Linux kernel, but it will take 3-4 years before this is widely deployed.
- The Bad News: the "wizard gap" is still large; jumbo frame use is still small; under-buffered switches and routers are still common; under-powered/misconfigured firewalls are common; soft failures still go undetected for months; user performance expectations are still too low.
23 The perfSONAR Collaboration
- The perfSONAR collaboration is an open source project led by ESnet, Internet2, Indiana University, and GEANT. Each organization has committed 1.5 FTE of effort to the project, plus additional help from many others in the community (OSG, RNP, SLAC, and more).
- The perfSONAR roadmap is influenced by: requests on the project issue tracker; annual user surveys sent to everyone on the user list; regular meetings with VOs using perfSONAR, such as the WLCG and OSG; and discussions at various perfSONAR-related workshops.
- Based on the above, every 6-12 months the perfSONAR governance group meets to prioritize features based on: impact to the community; level of effort required to implement and support; and the availability of someone with the right skill set for the task.
24 perfSONAR Toolkit
- The perfSONAR Toolkit is an open source implementation and packaging of the perfSONAR measurement infrastructure and protocols (http://www.perfsonar.net).
- All components are available as RPMs, and bundled into a CentOS 6-based netinstall (Debian support in the next release).
- perfSONAR tools are much more accurate if run on a dedicated perfSONAR host, not on the data server.
- Easy to install and configure; usually takes less than 30 minutes (see the sketch below).
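As a rough sketch of the RPM route on a RHEL/CentOS host: the bundle names below are assumptions based on the perfSONAR 3.4-era RPM bundles, so check the toolkit manual for the current repository setup and package names:
> yum install perfsonar-toolkit   # full toolkit bundle (assumed package name)
> yum install perfsonar-tools     # CLI tools only: bwctl, owamp, iperf3 (assumed package name)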
25 perfSONAR Toolkit Components
- BWCTL for scheduling periodic throughput (iperf, iperf3), ping, and traceroute tests.
- OWAMP for measuring one-way latency and packet loss (RFC 4656).
- esmond for storing measurements, summarizing results, and making them available via a REST API (example query below).
- Lookup Service for discovering other testers.
- On-demand test tools such as NDT (Network Diagnostic Test).
- Configuration GUIs to assist with managing all of the above.
- Graphs to display measurement results.
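Because esmond fronts the measurement archive with a REST API, stored results can be pulled with plain HTTP. A minimal sketch; the hostname is a placeholder, and the path follows the esmond measurement-archive convention:
> # list measurement metadata recorded over the last 24 hours, as JSON
> curl -s 'http://ps.example.edu/esmond/perfsonar/archive/?format=json&time-range=86400'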
26 Who is running perfSONAR? Currently over 1400 deployments world-wide. http://stats.es.net/servicesdirectory/
27 perfSONAR Deployment Growth
28 Active vs. Passive Monitoring
- Passive monitoring systems have limitations: performance problems are often only visible at the ends, and individual network components (e.g. routers) have no knowledge of end-to-end state.
- perfSONAR tests the network in ways that passive monitoring systems do not.
- More perfSONAR hosts = better network visibility. There are now enough perfSONAR hosts in the world to be quite useful.
29 perfSONAR Dashboard: Raising Expectations and Improving Network Visibility
- Status at a glance: packet loss, throughput, correctness.
- Current live instances at http://ps-dashboard.es.net/ and many more.
- Drill-down capabilities: test history between hosts, and the ability to correlate with other events.
- Very valuable for fault localization and isolation.
30 perfSONAR Hardware
- These days you can get a good 1U host capable of pushing 10Gbps TCP for around $500 (plus 10G NIC cost); see the perfSONAR user list.
- And you can get a host capable of 1G for around $100! Intel Celeron-based (ARM is not fast enough), e.g. http://… (Item=N82E…).
- VMs are not recommended; the tools work better if they can be guaranteed NIC isolation.
31 Active and Growing perfSONAR Community
- Active lists and forums provide instant access to advice and expertise from the community, and the ability to share metrics, experience, and findings with others to help debug issues on a global scale.
- Joining the community automatically increases the reach and power of perfSONAR: more endpoints means exponentially more ways to test, discover issues, and compare metrics.
32 perfSONAR Community
- The perfSONAR collaboration is working to build a strong user community to support the use and development of the software.
- perfSONAR mailing lists: Announcement list: https://mail.internet2.edu/wws/subrequest/perfsonar-announce — Users list: https://mail.internet2.edu/wws/subrequest/perfsonar-users
33 Use Cases & Success Stories. NANOG 64, Brian Tierney, ESnet, June 2, 2015. This document is a result of work by the perfSONAR Project (http://www.perfsonar.net) and is licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).
34 Success Stories - Failing Optic(s). (Chart, Gb/s over one month: normal performance, then degrading performance, then repair.)
35 Success Stories - Failing Optic(s): Adding an attenuator to a noisy link (discovered via OWAMP).
36 Success Stories - Firewall Trouble
37 Success Stories - Firewall Trouble. Results to a host behind the firewall:
38 Success Stories - Firewall Trouble. In front of the firewall:
39 Success Stories - Firewall Trouble. Want more proof? Let's look at a measurement tool through the firewall; measurement tools emulate a well-behaved application. Outbound, not filtered:
> nuttcp -T 10 -i 1 -p … bwctl.newy.net.internet2.edu
  … MB / 1.00 sec = … Mbps  0 retrans
  (nine more 1-second intervals, all with 0 retrans)
  … MB / 10.00 sec = … Mbps  13 %TX  11 %RX  0 retrans  8.38 msrtt
40 Success Stories - Firewall Trouble. Inbound, filtered:
> nuttcp -r -T 10 -i 1 -p … bwctl.newy.net.internet2.edu
  … MB / 1.00 sec = … Mbps  13 retrans
  … MB / 1.00 sec = … Mbps   4 retrans
  … MB / 1.00 sec = … Mbps   6 retrans
  … MB / 1.00 sec = … Mbps   9 retrans
  … MB / 1.00 sec = … Mbps   8 retrans
  … MB / 1.00 sec = … Mbps   5 retrans
  … MB / 1.00 sec = … Mbps   3 retrans
  … MB / 1.00 sec = … Mbps   7 retrans
  … MB / 1.00 sec = … Mbps   7 retrans
  … MB / 1.00 sec = … Mbps   8 retrans
  … MB / 10.00 sec = … Mbps  0 %TX  1 %RX  70 retrans  … msrtt
41 Success Stories - Host Tuning. Long path (~70ms), single-stream TCP, 10G cards, tuned hosts. Why the nearly 2x uptick? The net.ipv4.tcp_rmem/wmem maximums (used in autotuning) were raised to 64M instead of 16M (sketch below).
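A hedged sketch of that change on Linux, following the fasterdata.es.net host-tuning guidance (size the values to your own paths and memory):
# /etc/sysctl.conf -- raise the autotuning maximums to 64MB, then apply with 'sysctl -p'
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864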
42 Success Stories - Fiber Cut. perfSONAR can't fix fiber cuts, but you can see the loss event and the latency change due to traffic re-routing.
43 Success Stories - Monitoring TA (transatlantic) Links
44 Success Stories - MTU Changes
45 Success Stories - Host NIC Speed Mismatch. Sending from a 10G host to a 1G host leads to unstable performance. See http://fasterdata.es.net/performance-testing/troubleshooting/interface-speed-mismatch/ and http://fasterdata.es.net/performance-testing/evaluating-network-performance/impedence-mismatch/
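A quick host-side check for this condition is to compare the negotiated link speed on each end. Linux example; the interface name and the output line are illustrative:
> ethtool eth0 | grep -i speed
        Speed: 10000Mb/s    # a 1G peer would report 1000Mb/s here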
46 Another Failing Line Card Example
47 Success Stories Summary. Key messages:
- This type of active monitoring finds problems both in the network and on the end hosts.
- The ability to view long-term trends in latency, loss, and throughput is very useful.
- The ability to show someone a plot and say "what did you do on Day X? It broke something."
- The ability to show the impact of an upgrade (or the lack thereof).
48 Deployment & Advanced Regular Testing Strategies. NANOG 64, Brian Tierney, ESnet, June 2, 2015. This document is a result of work by the perfSONAR Project (http://www.perfsonar.net) and is licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).
49 Importance of Regular Testing
- We can't wait for users to report problems and then fix them (soft failures can go unreported for years!).
- Things just break sometimes: failing optics; somebody messed around in a patch panel and kinked a fiber; hardware goes bad.
- Problems that get fixed have a way of coming back: system defaults return after hardware/software upgrades, and new employees may not know why the previous employee set things up a certain way, so they back out fixes.
- It is important to continually collect, archive, and alert on active throughput test results.
50 A perfSONAR Dashboard: http://ps-dashboard.es.net (google "perfSONAR MaDDash" for other public dashboards)
51 perfSONAR Deployment Locations
- perfSONAR hosts are most useful if they are deployed near key resources such as data servers.
- More perfSONAR hosts allow segments of the path to be tested separately.
- Visibility is reduced for devices between perfSONAR hosts; you must rely on counters or other means where perfSONAR can't go.
52 Example perfSONAR Host Placement
53 Regular perfSONAR Tests
- We run regular tests to check three things: TCP throughput; one-way packet loss and delay; traceroute.
- perfSONAR has mechanisms for managing regular testing between perfSONAR hosts: statistics collection and archiving, graphs, dashboard display, and integration with NAGIOS.
- There are many public perfSONAR hosts around the world that you may be able to test against.
54 Regular Testing Options
- Beacon: let others test to you. No regular testing configuration is needed.
- Island: pick some hosts to test to, and store the data locally. No coordination with others is needed.
- Mesh: full coordination between you and others. Use a testing configuration that includes tests to everyone, and incorporate it into a dashboard.
55 Regular Testing - Beacon
- The beacon setup is typically employed by a network provider (regional, backbone, exchange point) as a service to its users: it allows people to test into the network.
- Can be configured with Layer 2 connectivity if needed.
- If no regular tests are scheduled, local storage requirements are minimal.
- It makes the most sense to enable all services (bandwidth and latency).
56 Regular Testing - Island
- The island setup allows a site to test against any number of the perfSONAR nodes around the world, and store the data locally. No coordination with other sites is required.
- Allows a view of near-horizon testing (short latency: campus, regional) and far-horizon testing (backbone network, remote collaborators).
- OWAMP is particularly useful for determining packet loss in these cases.
57 Regular Testing - Mesh
- A full mesh requires more coordination: a full mesh means all hosts involved are running the same test configuration, while a partial mesh could mean only a small number of related hosts are running a testing configuration.
- In either case, bandwidth and latency will be valuable test cases.
58 Develop a Test Plan
- What are you going to measure? Achievable bandwidth: 2-3 regional destinations and 4-8 important collaborators, 4-6 times per day to each destination; …-second tests within a region, maybe longer across oceans and continents. Loss/availability/latency: OWAMP to ~10-20 collaborators over diverse paths. Interface utilization and errors (via SNMP).
- What are you going to do with the results? NAGIOS alerts; reports to the user community (internal and/or external); a dashboard. (A cron-based sketch of such a plan follows.)
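A hedged sketch of one way to script such a plan on an island host with cron; the hosts and log path are placeholders, and the toolkit's regular-testing framework (which also archives results in esmond) is the preferred way to automate this:
# /etc/cron.d/ps-tests -- hypothetical schedule: a 20-second throughput test to each collaborator, 4x per day
0 */6 * * *  psuser  bwctl -T iperf3 -t 20 -c ps1.collab-a.example.edu >> /var/log/ps-tests.log 2>&1
10 */6 * * * psuser  bwctl -T iperf3 -t 20 -c ps1.collab-b.example.edu >> /var/log/ps-tests.log 2>&1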
59 http://stats.es.net/servicesdirectory/
60 Common Questions
- Q: Do most perfSONAR hosts accept tests from anyone? A: Not most, but some do. ESnet allows OWAMP and iperf TCP tests from all R&E address space, but not the commercial internet.
- Q: Will perfSONAR test traffic step on my production traffic? A: TCP is designed to be friendly, and ESnet tags all of our internal tests as "scavenger" to ensure test traffic is dropped first. But too much testing can impact production traffic, so be careful of this.
- Q: How can I control who can run tests to my host? A: Router ACLs, host ACLs, and bwctld.limits (sketch below). A future version of BWCTL will include even more options to limit testing, e.g. only allow iperf between 12am and 6am, or only allow 2 tests per day from all but the following subnets.
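For the bwctld.limits option, a minimal sketch of a policy file; the class names and subnet are placeholders, and the bwctld.limits man page documents the full syntax and the per-class limits (bandwidth, duration, etc.) you can attach:
# /etc/bwctl/bwctld.limits -- hypothetical policy: open-mode tests only from one trusted subnet
limit root with allow_open_mode=off
limit trusted with parent=root, allow_open_mode=on
assign default root
assign net 198.51.100.0/24 trusted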
61 Use of Measurement Tools. NANOG 64, Brian Tierney, ESnet, June 2, 2015. This document is a result of work by the perfSONAR Project (http://www.perfsonar.net) and is licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/).
62 Tool Usage
- All of the previous examples were discovered, debugged, and corrected with the aid of the tools in the perfSONAR toolkit.
- perfSONAR-specific tools: bwctl, owamp.
- Standard Unix tools: ping, traceroute, tracepath.
- Standard Unix add-on tools: iperf, iperf3, nuttcp.
63 Default perfSONAR Throughput Tool: iperf3
- iperf3 (http://software.es.net/iperf/) is a new implementation of iperf from scratch, with the goal of a smaller, simpler code base and a library version of the functionality that can be used in other programs.
- Some new features in iperf3: reports the number of TCP packets that were retransmitted, and the CWND; reports the average CPU utilization of the client and server (-V flag); support for zero-copy TCP (-Z flag); JSON output format (-J flag); an omit flag to ignore the first N seconds in the results (see the combined example below).
- On RHEL-based hosts, just type "yum install iperf3".
- More at: http://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf-and-iperf3/
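Putting those flags together, a sketch of a run that discards TCP slow start from the averages and produces machine-readable output (the server name is a placeholder):
> iperf3 -s                                                # on the remote host
> iperf3 -c ps.example.net -t 30 -O 5 -Z -V -J > out.json  # -O 5 omits the first 5s; -Z zero-copy; -V CPU stats; -J JSON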
64 Sample iperf3 output on a lossy network. Performance is well under 1 Gbit/sec due to heavy packet loss:
[ ID] Interval        Transfer     Bandwidth       Retr  Cwnd
[ 15]  …-… sec    139 MBytes  1.16 Gbits/sec   …    … KBytes
[ 15]  …-… sec    106 MBytes   891 Mbits/sec   …    … KBytes
[ 15]  …-… sec    105 MBytes   881 Mbits/sec   …    … KBytes
[ 15]  …-… sec   71.2 MBytes   598 Mbits/sec   …    … KBytes
[ 15]  …-… sec    110 MBytes   923 Mbits/sec   …    … KBytes
[ 15]  …-… sec    136 MBytes  1.14 Gbits/sec   …    … KBytes
[ 15]  …-… sec   88.8 MBytes   744 Mbits/sec   …    … KBytes
[ 15]  …-… sec    112 MBytes   944 Mbits/sec   …    … KBytes
[ 15]  …-… sec    119 MBytes   996 Mbits/sec   …    … KBytes
[ 15]  …-… sec    110 MBytes   923 Mbits/sec   …    … KBytes
65 BWCTL
- BWCTL is the wrapper around all the perfSONAR tools.
- Policy specification can do things like prevent tests to certain subnets, or allow longer tests to others. See the man pages for more details.
- Some general notes: use -c to specify a catcher (receiver) and -s to specify a sender. BWCTL will default to IPv6 if available (use -4 to force IPv4 as needed, or specify things in terms of an address if your host names are dual-homed).
66 bwctl Features. BWCTL lets you run any of the following between any 2 perfSONAR nodes: iperf3, nuttcp, ping, owping, traceroute, and tracepath. Sample commands:
> bwctl -c psmsu02.aglt2.org -s elpa-pt1.es.net -T iperf3
> bwping -s atla-pt1.es.net -c ga-pt1.es.net
> bwping -E -c …
> bwtraceroute -T tracepath -c lbl-pt1.es.net -l … -s atla-pt1.es.net
> bwping -T owamp -s atla-pt1.es.net -c ga-pt1.es.net -N … -i .01
67 Host Issues: Throughput Depends on TCP Buffers (a prequel to using BWCTL throughput tests)
- The bandwidth-delay product (BDP) is the amount of in-flight data allowed for a TCP connection: BDP = bandwidth * round-trip time.
- Example: 1 Gb/s cross-country, ~100ms RTT: 1,000,000,000 b/s * 0.1 s = 100,000,000 bits; 100,000,000 / 8 = 12,500,000 bytes; 12,500,000 bytes / (1024*1024) ≈ 12MB (one-liner check below).
- Most OSes have a default TCP buffer size of 4MB per flow; this is too small for long paths. More details at https://fasterdata.es.net/host-tuning/.
- Host admins need to make sure there is enough TCP buffer space to keep the sender from waiting for acknowledgements from the receiver.
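The same arithmetic as a one-line sanity check (just a convenience, not part of any toolkit):
> python -c 'print(1e9 * 0.1 / 8 / 2**20)'   # BDP in MB for 1 Gb/s at 100ms RTT
11.920928955078125                            # about 12MB, matching the figure above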
68 Network Throughput
- Start with a definition: network throughput is the rate of successful message delivery over a communication channel. In easier terms: how much data can I shovel into the network in a given amount of time?
- What does this tell us? It is the opposite of utilization (i.e. it is how much we can get at a given point in time, minus what is already utilized); utilization and throughput added together are capacity.
- Tools that measure throughput simulate a real-world use case (e.g. how well might bulk data movement perform).
- Ways to game the system: parallel streams; manual window-size adjustments; memory-to-memory testing (no disk involved).
69 Throughput Test Tools. Varieties of throughput tools that BWCTL knows how to manage:
- iperf (v2): the default for the command line (e.g. "bwctl -c HOST" will invoke this). Some known behavioral problems (CPU hog; hard to get UDP testing to be correct).
- iperf3: the default for the perfSONAR regular testing framework; can be invoked via a command-line switch (bwctl -T iperf3 -c HOST).
- nuttcp: a different code base; can be invoked via a command-line switch (bwctl -T nuttcp -c HOST). More control over how the tool behaves on the host (bind to CPU/core, etc.); similar feature set to iperf3.
70 BWCTL Example (iperf)
> bwctl -T iperf -f m -t 10 -i 2 -c sunn-pt1.es.net
bwctl: 83 seconds until test results available
bwctl: exec_line: /usr/bin/iperf -B … -s -f m -m -p … -t 10 -i 2
bwctl: run_tool: tester: iperf
bwctl: run_tool: receiver: …
bwctl: run_tool: sender: …
bwctl: start_tool: …

Server listening on TCP port 5136
Binding to local address …
TCP window size: 0.08 MByte (default)

[ 16] local … port 5136 connected with … port 5136
[ ID] Interval       Transfer     Bandwidth
[ 16]  0.0- 2.0 sec  90.4 MBytes   379 Mbits/sec
[ 16]  2.0- 4.0 sec   689 MBytes  2891 Mbits/sec
[ 16]  4.0- 6.0 sec   684 MBytes  2867 Mbits/sec
[ 16]  6.0- 8.0 sec   691 MBytes  2897 Mbits/sec
[ 16]  8.0-10.0 sec   691 MBytes  2898 Mbits/sec
[ 16]  0.0-10.0 sec  2853 MBytes  2386 Mbits/sec
[ 16] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)
(The final line over the complete test is what perfSONAR graphs: the average of the whole run.)
71 BWCTL Example (iperf3)
> bwctl -T iperf3 -t 10 -i 2 -c sunn-pt1.es.net
bwctl: run_tool: tester: iperf3
bwctl: run_tool: receiver: …
bwctl: run_tool: sender: …
bwctl: start_tool: …
Test initialized
Running client
Connecting to host …, port 5001
[ 17] local … port … connected to … port 5001
[ ID] Interval        Transfer     Bandwidth       Retransmits
[ 17]  0.00- 2.00 sec   430 MBytes  1.80 Gbits/sec  2
[ 17]  2.00- 4.00 sec   680 MBytes  2.85 Gbits/sec  0
[ 17]  4.00- 6.00 sec   669 MBytes  2.80 Gbits/sec  0
[ 17]  6.00- 8.00 sec   670 MBytes  2.81 Gbits/sec  0
[ 17]  8.00-10.00 sec   680 MBytes  2.85 Gbits/sec  0
[ ID] Interval        Transfer     Bandwidth       Retransmits
Sent
[ 17]  0.00-10.00 sec  3.06 GBytes  2.62 Gbits/sec  2
Received
[ 17]  0.00-10.00 sec  3.06 GBytes  2.63 Gbits/sec
iperf Done.
bwctl: stop_tool: …
(The summary over the complete test is what perfSONAR graphs: the average of the whole run.)
72 BWCTL Example (nuttcp)
> bwctl -T nuttcp -f m -t 10 -i 2 -c sunn-pt1.es.net
nuttcp-t: buflen=65536, nstream=1, port=5001 tcp -> …
nuttcp-t: connect to … with mss=8948, RTT=… ms
nuttcp-t: send window size = 98720, receive window size = 87380
nuttcp-t: available send window = 74040, available receive window = 65535
nuttcp-r: buflen=65536, nstream=1, port=5001 tcp
nuttcp-r: send window size = 98720, receive window size = 87380
nuttcp-r: available send window = 74040, available receive window = 65535
  … MB / 2.00 sec = … Mbps  1 retrans
  … MB / 2.00 sec = … Mbps  0 retrans
  … MB / 2.00 sec = … Mbps  0 retrans
  … MB / 2.00 sec = … Mbps  0 retrans
  … MB / 2.00 sec = … Mbps  0 retrans
nuttcp-t: … MB in … real seconds = … KB/sec = … Mbps
nuttcp-t: … MB in 2.32 CPU seconds = … KB/cpu sec
nuttcp-t: retrans = 1
nuttcp-t: … I/O calls, msec/call = 0.21, calls/sec = …
nuttcp-t: 0.0user 2.3sys 0:10real 23% 0i+0d 768maxrss 0+2pf …csw

nuttcp-r: … MB in … real seconds = … KB/sec = … Mbps
nuttcp-r: … MB in 2.36 CPU seconds = … KB/cpu sec
nuttcp-r: … I/O calls, msec/call = 0.18, calls/sec = …
nuttcp-r: 0.0user 2.3sys 0:10real 23% 0i+0d 770maxrss 0+4pf …csw
73 BWCTL Example (nuttcp, 1% loss)
> bwctl -T nuttcp -f m -t 10 -i 2 -c sunn-pt1.es.net
bwctl: exec_line: /usr/bin/nuttcp -vv -p … -i 2 -T 10 -t …
nuttcp-t: buflen=65536, nstream=1, port=5004 tcp -> …
nuttcp-t: connect to … with mss=8948, RTT=… ms
nuttcp-t: send window size = 98720, receive window size = 87380
nuttcp-t: available send window = 74040, available receive window = 65535
nuttcp-r: send window size = 98720, receive window size = 87380
nuttcp-r: available send window = 74040, available receive window = 65535
  … MB / 2.00 sec = … Mbps  27 retrans
  … MB / 2.00 sec = … Mbps   4 retrans
  … MB / 2.00 sec = … Mbps   7 retrans
  … MB / 2.00 sec = … Mbps  13 retrans
  … MB / 2.00 sec = … Mbps   7 retrans
nuttcp-t: … MB in … real seconds = … KB/sec = … Mbps
nuttcp-t: … MB in 0.01 CPU seconds = … KB/cpu sec
nuttcp-t: retrans = 58
nuttcp-t: 409 I/O calls, msec/call = 25.04, calls/sec = 40.90
nuttcp-t: 0.0user 0.0sys 0:10real 0% 0i+0d 768maxrss 0+2pf 51+3csw

nuttcp-r: … MB in … real seconds = … KB/sec = … Mbps
nuttcp-r: … MB in 0.02 CPU seconds = … KB/cpu sec
nuttcp-r: 787 I/O calls, msec/call = 13.40, calls/sec = 76.44
nuttcp-r: 0.0user 0.0sys 0:10real 0% 0i+0d 770maxrss 0+4pf 382+0csw
74 Throughput Expectations
- Q: What iperf throughput should you expect to see on an uncongested 10Gbps network?
- A: … Gbps, depending on RTT, TCP tuning, CPU core speed, and the ratio of sender speed to receiver speed.
75 OWAMP
- OWAMP = One-Way Active Measurement Protocol, i.e. one-way ping.
- Some differences from traditional ping: it measures each direction independently (recall that we often see things like congestion occur in one direction and not the other); it uses small, evenly spaced groupings of UDP (not ICMP) packets; and it can ramp up the interval of the stream, the size of the packets, and the number of packets (example below).
- OWAMP is most useful for detecting packet-train abnormalities on an end-to-end basis: loss; duplication; out-of-order packets; latency on the forward vs. the reverse path; number of Layer 3 hops.
- It does require reasonably accurate time via NTP; the perfSONAR toolkit takes care of this for you.
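Those knobs map to standard owping flags: -c sets the packet count, -i the mean time between packets in seconds, and -s the padding added to each packet. A sketch against one of the public ESnet hosts used in the examples below:
> owping -c 1000 -i 0.01 -s 500 sunn-owamp.es.net   # 1000 packets, 10ms apart, 500 bytes of padding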
76 What OWAMP Tells Us
- OWAMP is very useful in regular testing: congestion or queuing often occurs in a single direction.
- Packet loss information (and how often/how much loss occurs over time) is more valuable than throughput: it gives you a "why" to go with an observation. If your router is going to drop a 50B UDP packet, it is most certainly going to drop a 1500B/9000B TCP packet.
- Overlay your data: compare your throughput results against your OWAMP results; do you see patterns?
- Alarm on each, if you are alarming (and we hope you are alarming).
77 OWAMP (initial)
> owping sunn-owamp.es.net
Approximately 12.6 seconds until results available

--- owping statistics from [wash-owamp.es.net]:8885 to [sunn-owamp.es.net]:… ---
SID: c681fe4ed67f1b3e5faeb249f078ec8a
first: …T18:11:11.420
last:  …T18:11:20.587
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 31/31.1/31.7 ms, (err=… ms)
one-way jitter = 0 ms (P95-P50)
Hops = 7 (consistently)
no reordering

--- owping statistics from [sunn-owamp.es.net]:9027 to [wash-owamp.es.net]:… ---
SID: c67cfc7ed67f1b3eaab69b94f393bc46
first: …T18:11:11.321
last:  …T18:11:22.672
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 31.4/31.5/32.6 ms, (err=… ms)
one-way jitter = 0 ms (P95-P50)
Hops = 7 (consistently)
no reordering

(These summaries are what perfSONAR graphs: the average of the complete test.)
78 OWAMP (with loss)
> owping sunn-owamp.es.net
Approximately 12.6 seconds until results available

--- owping statistics from [wash-owamp.es.net]:8852 to [sunn-owamp.es.net]:… ---
SID: c681fe4ed67f1f…c341a2b83f3
first: …T18:27:22.032
last:  …T18:27:32.904
100 sent, 12 lost (12.000%), 0 duplicates
one-way delay min/median/max = 31.1/31.1/31.3 ms, (err=… ms)
one-way jitter = nan ms (P95-P50)
Hops = 7 (consistently)
no reordering

--- owping statistics from [sunn-owamp.es.net]:9182 to [wash-owamp.es.net]:… ---
SID: c67cfc7ed67f1f09531c87cf38381bb6
first: …T18:27:21.993
last:  …T18:27:33.785
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 31.4/31.5/31.5 ms, (err=… ms)
one-way jitter = 0 ms (P95-P50)
Hops = 7 (consistently)
no reordering

(These summaries are what perfSONAR graphs: the average of the complete test.)
79 OWAMP (with re-ordering)
> owping sunn-owamp.es.net
Approximately 12.9 seconds until results available

--- owping statistics from [wash-owamp.es.net]:8814 to [sunn-owamp.es.net]:… ---
SID: c681fe4ed67f21d94991ea335b7a1830
first: …T18:39:22.543
last:  …T18:39:31.503
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 31.1/106/106 ms, (err=… ms)
one-way jitter = 0.1 ms (P95-P50)
Hops = 7 (consistently)
1-reordering = …%
2-reordering = …%
no 3-reordering

--- owping statistics from [sunn-owamp.es.net]:8770 to [wash-owamp.es.net]:… ---
SID: c67cfc7ed67f21d994c1302dff644543
first: …T18:39:22.602
last:  …T18:39:31.279
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 31.4/31.5/32 ms, (err=… ms)
one-way jitter = 0 ms (P95-P50)
Hops = 7 (consistently)
no reordering
80 Plotting owamp Results. [Slide placeholder: "Add new owamp plot here"]
81 BWCTL (Traceroute)
> bwtraceroute -T traceroute -s sacr-pt1.es.net -c wash-pt1.es.net
bwtraceroute: Using tool: traceroute
traceroute to … (…), 30 hops max, 60 byte packets
 1  sacrcr5-sacrpt1.es.net (…)  … ms  … ms  … ms
 2  denvcr5-ip-a-sacrcr5.es.net (…)  … ms  … ms  … ms
 3  kanscr5-ip-a-denvcr5.es.net (…)  … ms  … ms  … ms
 4  chiccr5-ip-a-kanscr5.es.net (…)  … ms  … ms  … ms
 5  washcr5-ip-a-chiccr5.es.net (…)  … ms  … ms  … ms
 6  wash-pt1.es.net (…)  … ms  … ms  … ms

> bwtraceroute -T traceroute -c sacr-pt1.es.net -s wash-pt1.es.net
bwtraceroute: Using tool: traceroute
traceroute to … (…), 30 hops max, 60 byte packets
 1  wash-te-perf-if1.es.net (…)  … ms  … ms  … ms
 2  chiccr5-ip-a-washcr5.es.net (…)  … ms  … ms  … ms
 3  kanscr5-ip-a-chiccr5.es.net (…)  … ms  … ms  … ms
 4  denvcr5-ip-a-kanscr5.es.net (…)  … ms  … ms  … ms
 5  sacrcr5-ip-a-denvcr5.es.net (…)  … ms  … ms  … ms
 6  sacr-pt1.es.net (…)  … ms  … ms  … ms
82 BWCTL (Tracepath)
> bwtraceroute -T tracepath -s sacr-pt1.es.net -c wash-pt1.es.net
bwtraceroute: Using tool: tracepath
bwtraceroute: 36 seconds until test results available

SENDER START
 1?: [LOCALHOST]  pmtu 9000
 1:  sacrcr5-sacrpt1.es.net (…)  0.489ms
 1:  sacrcr5-sacrpt1.es.net (…)  0.463ms
 2:  denvcr5-ip-a-sacrcr5.es.net (…)  … ms
 3:  kanscr5-ip-a-denvcr5.es.net (…)  … ms
 4:  chiccr5-ip-a-kanscr5.es.net (…)  … ms
 5:  washcr5-ip-a-chiccr5.es.net (…)  … ms
 6:  wash-pt1.es.net (…)  … ms  reached
Resume: pmtu 9000 hops 6 back 59

SENDER END
83 BWCTL (Tracepath with MTU mismatch)
> bwtraceroute -T tracepath -c nettest.lbl.gov -s anl-pt1.es.net
bwtraceroute: Using tool: tracepath
 1?: [LOCALHOST]  pmtu 9000
 1:  anlmr2-anlpt1.es.net (…)  0.249ms asymm 2
 1:  anlmr2-anlpt1.es.net (…)  0.197ms asymm 2
 2:  no reply
 3:  kanscr5-ip-a-chiccr5.es.net (…)  … ms
 4:  denvcr5-ip-a-kanscr5.es.net (…)  … ms
 5:  sacrcr5-ip-a-denvcr5.es.net (…)  … ms
 6:  sunncr5-ip-a-sacrcr5.es.net (…)  … ms
 7:  et…er1-n1.lbl.gov (…)  … ms
 8:  t5-4.ir1-n1.lbl.gov (…)  … ms
 9:  t5-4.ir1-n1.lbl.gov (…)  … ms  pmtu 1500
 9:  nettest.lbl.gov (…)  … ms  reached
Resume: pmtu 1500 hops 9 back 56
84 BWCTL (Ping)
> bwping -c nettest.lbl.gov -s anl-pt1.es.net
bwping: Using tool: ping
PING … (…) from …: 56(84) bytes of data.
64 bytes from …: icmp_seq=1 ttl=56 time=49.1 ms
64 bytes from …: icmp_seq=2 ttl=56 time=49.1 ms
64 bytes from …: icmp_seq=3 ttl=56 time=49.1 ms
64 bytes from …: icmp_seq=4 ttl=56 time=49.1 ms
64 bytes from …: icmp_seq=5 ttl=56 time=49.1 ms

To test to a host not running bwctl, use -E:
> bwping -E -c …
bwping: Using tool: ping
PING 2607:f8b0:4010:800::1013 (2607:f8b0:4010:800::1013) from 2001:400:2201:1190::3: 56 data bytes
64 bytes from 2607:f8b0:4010:800::1013: icmp_seq=1 ttl=54 time=48.1 ms
64 bytes from 2607:f8b0:4010:800::1013: icmp_seq=2 ttl=54 time=48.2 ms
64 bytes from 2607:f8b0:4010:800::1013: icmp_seq=3 ttl=54 time=48.2 ms
64 bytes from 2607:f8b0:4010:800::1013: icmp_seq=4 ttl=54 time=48.2 ms
85 BWCTL (owamp)
> bwping -T owamp -4 -s sacr-pt1.es.net -c wash-pt1.es.net
bwping: Using tool: owamp
bwping: 42 seconds until test results available

--- owping statistics from […]:5283 to […]:… ---
SID: c67cee22d85fc3b2bbe23f83da5947b2
first: …T08:17:58.534
last:  …T08:18:17.581
10 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 29.9/29.9/29.9 ms, (err=0.191 ms)
one-way jitter = 0.1 ms (P95-P50)
Hops = 5 (consistently)
no reordering
86 Testing Pitfalls
- Don't trust a single result; run any test a few times to confirm the results. Hop-by-hop path characteristics may be continuously changing.
- "A poor carpenter blames his tools": the tools are only as good as the people using them, so work methodically.
- Trust the results; remember that they are giving you a number based on the entire environment.
- If the site isn't using perfSONAR, step 1 is to get them to do so: http://www.perfsonar.net
- Get some help from the community: [email protected]
87 Resources
- perfSONAR website: http://www.perfsonar.net
- perfSONAR Toolkit manual: http://docs.perfsonar.net/
- perfSONAR mailing lists: http://…/getting-help/
- perfSONAR directory: http://stats.es.net/ServicesDirectory/
- FasterData knowledgebase: http://fasterdata.es.net/
88 EXTRA SLIDES