1000Mbps Ethernet Performance Test Report 2014.4




Test Setup:

Test Equipment Used:
- Lenovo ThinkPad T420 laptop: Intel Core i5-2540M CPU @ 2.60 GHz, 4 GB DDR3 memory, Intel 82579LM Gigabit Ethernet adapter, CentOS 7.0 64-bit Desktop
- Stock ZedBoard Rev. D Development Kit
- Category 5e Ethernet patch cable

Test Details:
- The network adapter of the test PC is set to 1 Gbps full duplex.

Disclaimer:
- The results of this experiment are provided for reference/educational purposes only and may not reflect results observed with other test equipment.
- Beyond transmission overheads, a number of factors can affect network performance and throughput, including latency, receive window size, and system limitations, so the calculated throughput typically does not reflect the maximum achievable throughput.
- If several systems are communicating simultaneously, the throughput between any pair of nodes can be substantially lower than the nominal network bandwidth.
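
As a point of reference for the measurements that follow, the theoretical maximum TCP payload throughput on gigabit Ethernet can be estimated from the fixed per-frame overheads. The figures below are standard Ethernet/IP/TCP constants; the one-line calculation is an illustrative sketch, not part of the original test procedure:

  # Per 1500-byte MTU frame: preamble+SFD (8) + Ethernet header (14) + FCS (4)
  # + inter-frame gap (12) = 38 bytes of link overhead; IP (20) + TCP (20)
  # headers leave 1460 bytes of payload.
  $ echo "scale=1; 1000 * 1460 / (1500 + 38)" | bc
  949.2

So roughly 949 Mbits/sec is the practical ceiling; the gap between that figure and the measured rates reflects the system limitations noted above.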

Test Case 1: Roundtrip Ping Latency Test

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to ping a host PC for 100 iterations.
- Ethernet linked at 1000 Mbps.

Test Details:
- The following command was used to test the ZedBoard:
  # ping 192.168.1.100 -q -c 100
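
With per-packet output suppressed by -q as above, ping still prints a summary line of the form "rtt min/avg/max/mdev = .../.../.../... ms". A minimal post-processing sketch (assuming the standard Linux ping summary format; splitting on "/" puts the average in field 5):

  # ping 192.168.1.100 -q -c 100 | awk -F'/' '/^rtt/ {print "avg RTT: " $5 " ms"}'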

Test Case 1: Roundtrip Ping Latency Test

Test Results:
- No packet loss was observed.
- Average round-trip delay ranged from 0.320 ms to 0.441 ms.
- [Screenshots: Results, Test Board; Results, Control Board]

Test Case 2: Normal TCP Throughput Test

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- The following command was used for each ZedBoard TCP throughput test:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60

Test Case 2: Normal TCP Throughput Test

Test Results:
- Sustained throughput rates of 685-715 Mbits/sec to the server.
- [Screenshots: Results, Test Board; Results, Control Board]

Test Case 3: Normal TCP Throughput Parallel Stream Test

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- The following command was used on each test ZedBoard to launch 2 parallel TCP throughput test streams:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --parallel 2 --time 60

Test Case 3: Normal TCP Throughput Parallel Stream Test

Test Results:
- Sustained throughput rates of 659-672 Mbits/sec to the server.
- [Screenshots: Results, Test Board; Results, Control Board]

Test Case 4: Normal UDP Throughput Test

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- The following command was used on each test ZedBoard to launch a 1000 Mbps UDP bandwidth test:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60 --udp

Test Case 4: Normal UDP Throughput Test

Test Results:
- Sustained throughput rates of 534-544 Mbits/sec to the server.
- [Screenshots: Results, Test Board; Results, Control Board]

Test Case 5: Zerocopy TCP Throughput Test

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- iperf3's --zerocopy option uses a zero-copy send path such as sendfile(2) in place of the usual write(2), reducing CPU copy overhead on the sender.
- The following command was used for each ZedBoard TCP throughput test:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60 --zerocopy

Test Case 5: Zerocopy TCP Throughput Test

Test Results:
- Sustained throughput rates of 795-810 Mbits/sec to the server.
- [Screenshots: Results, Test Board; Results, Control Board]

Test Case 6: Zerocopy UDP Throughput Test

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- The following command was used on each test ZedBoard to launch a 1000 Mbps UDP bandwidth test:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60 --udp --zerocopy

Test Case 6: Zerocopy UDP Throughput Test

Test Results:
- Sustained throughput rates of 533-535 Mbits/sec to the server.
- [Screenshots: Results, Test Board; Results, Control Board]

Test Case 7: Type of Service TCP and UDP Throughput Tests

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- Several iperf3 commands were used on a ZedBoard to determine how the Type of Service (ToS) field affects throughput; here is one example:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60 --tos 0x02
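
Since those "several commands" differ only in the --tos value, the runs can be scripted. A minimal sketch of such a sweep, run from the ZedBoard shell (illustrative only, not the exact procedure used for this report):

  # Sweep the four ToS values charted below (TCP shown; add --udp for the UDP runs)
  for tos in 0x02 0x04 0x08 0x10; do
      ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
          --format m --time 60 --tos $tos
  done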

Test Case 7: Type of Service TCP and UDP Throughput Tests

Test Results:
- Little to no performance difference was noted for the various Type of Service values used during the tests.
- [Chart: Type of Service (ToS field) vs. throughput (Mbits/second), TCP and UDP series; categories tested: Low Cost - 0x10, Low Delay - 0x08, Reliability - 0x04, Throughput - 0x02]

Test Case 8: TCP Packet Length Throughput Tests

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- Several iperf3 commands were used on a ZedBoard to determine how the TCP packet length affects throughput; here is one example:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --length 1K --time 60
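
A scripted sweep over the charted lengths might look like the following sketch (illustrative; iperf3 accepts K/M suffixes on --length):

  # Sweep the TCP write lengths charted below, 1 KByte to 1024 KBytes
  for len in 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1024K; do
      ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
          --format m --length $len --time 60
  done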

Test Case 8: TCP Packet Length Throughput Tests

Test Results:
- Performance differences were noted for the various TCP packet lengths used during the tests.
- [Chart: TCP packet length (KBytes) vs. throughput (Mbits/second); lengths tested: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]

Test Case 9: UDP Packet Length Throughput Tests

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- Several iperf3 commands were used on a ZedBoard to determine how the UDP packet length affects throughput; here is one example:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --length 32 --time 60 --udp
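
The UDP runs can be swept the same way; a minimal sketch (illustrative only):

  # Sweep the UDP datagram sizes charted below, 32 to 32768 bytes.
  # Datagrams larger than the 1500-byte MTU are IP-fragmented, which costs throughput.
  for len in 32 64 128 256 512 1024 2048 4096 8192 16384 32768; do
      ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
          --format m --length $len --time 60 --udp
  done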

Test Case 9: UDP Packet Length Throughput Tests

Test Results:
- Performance differences were noted for the various UDP packet lengths used during the tests.
- [Chart: UDP packet length (Bytes) vs. throughput (Mbits/second); lengths tested: 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768]

Test Case 10: TCP Window Size Throughput Tests

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- Several iperf3 commands were used on a ZedBoard to determine how the TCP window size affects throughput; here is one example:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60 --window 32K
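
A sketch of a scripted window-size sweep over the charted range (illustrative only; 1M, 2M, etc. are the same sizes as 1024K, 2048K, and so on):

  # Sweep the requested TCP window / socket buffer sizes, 32 KBytes to 32 MBytes.
  # The kernel may clamp large requests (see net.core.rmem_max / net.core.wmem_max).
  for win in 32K 64K 128K 256K 512K 1M 2M 4M 8M 16M 32M; do
      ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
          --format m --time 60 --window $win
  done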

Test Case 10: TCP Window Size Throughput Tests

Test Results:
- Performance differences were noted for the various TCP window sizes used during the tests.
- [Chart: TCP window size (KBytes) vs. throughput (Mbits/second); sizes tested: 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768]

Test Case 11: UDP Window Size Throughput Tests

Test Overview:
- The Xilinx Zynq 2014.4 Release (http://www.wiki.xilinx.com/zynq+2014.4+release) is used on the ZedBoard targets to test the throughput of the network connection to a host PC for 60 seconds.
- iperf3 v3.0.9, compiled for Zynq, performs the network throughput testing.
- Ethernet linked at 1000 Mbps.

Test Details:
- iperf3 v3.0.9 was compiled for the x86 target with gcc from the GitHub source repository (https://github.com/esnet/iperf.git) and used as the iperf3 server on the CentOS 7.0 host PC:
  $ ./iperf3 --server
- Several iperf3 commands were used on a ZedBoard to determine how the UDP window size affects throughput; here is one example:
  # ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
      --format m --time 60 --udp --window 32K
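
The analogous UDP sweep, as a minimal sketch (illustrative only):

  # For UDP, --window sizes the socket buffers rather than a TCP window.
  for win in 32K 64K 128K 256K 512K 1M 2M 4M 8M 16M 32M; do
      ./iperf3 --bandwidth 1000M --client 192.168.1.100 \
          --format m --time 60 --udp --window $win
  done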

Test Case 11: UDP Window Size Throughput Tests

Test Results:
- Performance differences were noted for the various UDP window sizes used during the tests.
- [Chart: UDP window size (KBytes) vs. throughput (Mbits/second); sizes tested: 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768]