10GE NIC throughput measurements (Revised: ) S.Esenov
Contents

1 Introduction
2 Terminology
3 Factors affecting throughput
3.1 Host PCIe, memory and CPU connectivity
3.1.1 Front side bus
3.1.2 Interrupts
3.1.3 Memory
3.1.4 PCIe
3.2 NIC hardware
3.3 Kernel
3.4 Multiple cores
3.5 Driver software and vendor tuning suggestions
3.6 System and user parameter settings
3.7 Non transfer related activity on the host
4 Host hardware and software summary
5 NIC hardware and software summary
6 Test setup
7 Starting point settings and performance measurements
8 Conclusions
9 Follow up work and open questions
10 Glossary
11 Parameters
12 Performance testing tools used
12.1 Netperf
12.2 Iperf
12.3 Xperf
13 Monitoring tools used
13.1 Mpstat
13.2 Interrupt inspection
13.3 Setting interrupt affinity
13.4 Other monitoring tools
14 Configuration tools
14.1 sysctl
14.2 ifconfig
14.3 ethtool
14.4 Other configuration tools
15 Packet handling
16 References
1 Introduction

The original, naive, aim was to quickly determine the best TCP and UDP throughput rates between machines hosting the various NIC boards purchased. The immediate requirement for this work was to provide a PC based readout system for the HPAD (AGIPD) detector, which attaches via a UDP 10GE fibre connection to the front end electronics readout FPGA. This connection is required to run at the highest possible throughput rate without packet loss.

In reality a significant period of time was spent: understanding what factors affect the throughput, which versions of kernels and drivers have to be used with each other, creating and compiling drivers, identifying and modifying (tuning) system and user parameters, finding the tools to monitor and modify parameters, writing additional monitoring tools (xperf), etc. The result of this work, a snapshot of our understanding of how to tune and determine the best throughput, is described in this note. It should be considered as a starting point for a later, more complete, attempt to understand throughput.

2 Terminology

Note that the throughput numbers quoted in the text are decimal, thus 1 GB is 1,000,000,000 bytes and not 1,073,741,824 bytes. This convention has been chosen because it is used by most of the performance monitoring tools employed here.

3 Factors affecting throughput

The following sections describe hardware and software details which are likely to affect the throughput.

3.1 Host PCIe, memory and CPU connectivity

The main boards of the workstations used during NIC testing are based on the Intel 5400 chipset. The main board layout is shown schematically in Figure 1.
Figure 1 Intel 5400 chipset block diagram

The next paragraphs give brief descriptions of the FSB, interrupt, memory and PCIe sub-systems of the MCH; the text is paraphrased or copied directly from the Intel 5400 Chipset Memory Controller Hub documentation ( ).

3.1.1 Front side bus

The 5400 chipset is designed for use with two (P1 & P2) dual and quad core processors. It consists of two principal components, a Memory Controller Hub (MCH) for the host bridge and the I/O controller hub. Each processor socket is connected to the MCH via a separate Front Side Bus (FSB) which supports 1066, 1333 and 1600 Mega Transactions per Second (MTS), i.e. a quad-pumped bus running off a 266/333/400 MHz clock, and a point to point DIB processor system bus interface. Each processor FSB supports peak address generation rates of 533/667/800 Million Addresses/second. Both FSB data buses are quad pumped and 64 bits wide, which allows peak bandwidths inbound of 8 GB/s (1066 MTS), 10 GB/s (1333 MTS) and 12 GB/s (1600 MTS) and outbound of 17 GB/s, 21 GB/s and 25 GB/s, respectively. The MCH supports 38-bit host addressing, decoding up to 128 GB of the processor's memory address space. Host-initiated I/O cycles are decoded to PCI, PCI Express, ESI interface or MCH configuration space. Host-initiated memory cycles are decoded to PCI, PCI Express,
ESI or system memory. A snoop filter eliminates data cache coherence traffic on the front side bus of the processor being snooped.

The host CPUs used in the test setup, Table 1, support the 1333 MTS FSB clock and thus have 10 GB/s inbound and 21 GB/s outbound bandwidth.

3.1.2 Interrupts

Interrupts are also delivered via the FSB. The legacy APIC serial bus interrupt delivery mechanism is not supported. Interrupt-related messages are encoded on the FSB as Interrupt Message Transactions. In the Intel 5400 chipset platform, FSB interrupts may originate from the processor on the system bus, or from a downstream device on the Enterprise South Bridge Interface (ESI). In the latter case, the MCH drives the Interrupt Message Transaction onto the system bus. In the Intel 5400 chipset the Intel 631xESB/632xESB I/O Controller Hub contains IOxAPICs, and its interrupts are generated as upstream ESI memory writes. Furthermore, PCI 2.3 defines Message Signaled Interrupts (MSI) that are also in the form of memory writes. A PCI 2.3 device may generate an interrupt as an MSI cycle on its PCI bus instead of asserting a hardware signal to the IOxAPIC. The MSI may be directed to the IOxAPIC, which in turn generates an interrupt as an upstream ESI memory write. Alternatively, the MSI may be directed straight to the FSB. The target of an MSI depends on the address of the interrupt memory write. The MCH forwards inbound ESI and PCI (PCI semantic only) memory writes to address 0FEEx_xxxxh to the FSB as Interrupt Message Transactions.

3.1.3 Memory

Fully Buffered DIMM technology was developed to address the higher performance needs of server and workstation platforms. FB-DIMM addresses the dual needs for higher bandwidth and larger memory sizes. FB-DIMM memory DIMMs contain an Advanced Memory Buffer (AMB) device that serves as an interface between the point to point FB-DIMM channel links and the DDR2 DRAM devices. Each AMB is capable of buffering up to two ranks of DRAM devices. Each AMB supports two complete FB-DIMM channel interfaces. The first FB-DIMM interface is the incoming interface between the AMB and its preceding device. The second interface is the outgoing interface, between the AMB and its succeeding device. The point-to-point FB-DIMM links are terminated by the last AMB in a chain. The outgoing interface of the last AMB requires no external termination.

The four FB-DIMM channels are organized into two branches of two channels per branch. Each branch is supported by a separate Memory Controller (MC). The two channels on each branch operate in lock step to increase FB-DIMM bandwidth. A branch transfers 16 bytes of payload/frame on Southbound lanes and 32 bytes of payload/frame on Northbound lanes.
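As a cross-check of the figures above (our own arithmetic, not taken from the chipset documentation), the quad-pumped 64-bit FSB moves 8 bytes per transaction, and two lock-stepped DDR2 channels together deliver the same peak byte rate, which is why the branch pairing matches the FSB bandwidth requirement. A minimal shell sketch of the arithmetic:

# peak byte rate of the quad-pumped 64-bit FSB and of two lock-stepped DDR2 channels
awk 'BEGIN {
    ddr[1066] = 533; ddr[1333] = 667; ddr[1600] = 800   # FSB MTS -> matching DDR2 speed grade
    for (f in ddr)
        printf "FSB %4d MTS: %5.1f GB/s   2 x DDR2-%d: %5.1f GB/s\n",
               f, f * 8 / 1000, ddr[f], 2 * ddr[f] * 8 / 1000
}'

This reproduces the 8/10/12 GB/s inbound figures quoted above; the 17/21/25 GB/s outbound figures appear to correspond to roughly twice the per-bus value, i.e. both FSBs taken together.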
The key features of the FB-DIMM memory interface are summarized in the following list.

- Four Fully Buffered DDR (FB-DIMM) memory channels.
- Branch channels are paired together in lock step to match the FSB bandwidth requirement.
- Each FB-DIMM channel can link up to four Fully Buffered DDR2 DIMMs (FBDIMM1).
- Supports up to 16 dual-ranked FB-DDR2 8 GB DIMMs, i.e. 128 GB of physical memory.
- The FB-DIMM link speed is 6x the DDR data transfer speed. A 3.2 GHz FB-DIMM link supports DDR2-533 (FSB@1067 MT/s), a 4.0 GHz FB-DIMM link supports DDR2-667 (FSB@1333 MT/s) and a 4.8 GHz FB-DIMM link supports DDR2-800 (FSB@1600 MT/s).
- The MCH complies with the FB-DIMM specification definition of a host and is compatible with any FB-DIMM-compliant DIMM.
- Special single channel, single DIMM operation mode (Branch 0, Channel 0, Slot 0 position only).
- All memory devices must be DDR2.

The memory used in the test setup, Table 1, is DDR2-800, which could drive 12 and 25 GB/s in and out of a 1600 MTS FSB. The 1333 MTS FSB used lowers these rates to 10 and 21 GB/s, respectively. We are not certain about how the number of DIMM rows affects these numbers!

3.1.4 PCIe

PCI Express port 1 through port 8 are general purpose x4 PCI Express ports that may be used to connect to PCI Express devices. These x4 ports may be combined into high performance x8 PCI Express ports, or into up to two optimized high performance x16 graphics interface ports. The x16 ports contain several architectural enhancements to improve graphics performance. Port 1 and Port 2, Port 3 and Port 4, Port 5 and Port 6, Port 7 and Port 8 are combinable to form single x8 ports. Ports 1-4 and ports 5-8 are combinable to form the two x16 ports. The Intel 5400 chipset MCH supports up to four high performance x8 ports at GEN 1 speed or up to two high performance x16 graphics PCI Express ports at revision 2.0 speed.

The NICs used in the test setup, Table 2, both use x8 lanes, corresponding to a bandwidth of 2 GB/s.
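A PCIe Gen 1 lane carries 2.5 GT/s with 8b/10b encoding, i.e. 250 MB/s per lane, so x8 gives the 2 GB/s quoted above. Whether a NIC has actually trained to the expected link width and speed can be checked with lspci; a minimal sketch, in which the address 07:00.0 is the one reported for the Intel NIC in the boot log of section 5 and should be replaced by whatever the first command reports:

# list the Ethernet devices and their PCI addresses
lspci | grep -i ethernet
# advertised (LnkCap) and negotiated (LnkSta) link speed and width of the NIC
lspci -vv -s 07:00.0 | grep -E 'LnkCap|LnkSta'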
3.2 NIC hardware

It should be expected that good performance is obtained using hardware which is capable of performing the required task. The NIC boards chosen implement data transfer over PCIe using x8 lanes, i.e. 2 GB/s theoretical bandwidth. In the near future PCIe 3.0 higher rates and x16 lane implementations should further improve the maximum bandwidth.

The TCP Offloading Engine (TOE) of the Chelsio has not been used in the measurements reported here. There are two camps of opinion concerning TOE: supporters (e.g. Chelsio) who envisage offloading TCP stack handling from the CPU cores, and those (e.g. Intel) who foresee the work of stack handling, etc., remaining with a dedicated CPU core.

3.3 Kernel

Modern Linux kernels (2.6 and later) support various realtime capabilities. It is sometimes necessary to deterministically meet scheduling deadlines with specific tolerances as well as being fast in terms of program execution. In this respect non-realtime kernels have higher latencies, but often perform better than realtime kernels w.r.t. program execution. A realtime kernel has been used whilst making the measurements presented here; it is believed that this should reduce UDP packet loss because of the lower response latencies associated with the realtime system. There are various realtime Linux alternatives, from hard realtime to soft realtime solutions, see Ref 2. Here we use the realtime configuration ( preemptive realtime ) of the Linux kernel ( rt) from the Novell opensuse 11.0 distribution. What is important is that this kernel is supported by the driver software of the NIC vendors.

3.4 Multiple cores

The availability of multiple CPU sockets and multiple cores per socket increases the configuration space for performing tests; which cores should be used on which sockets? In the measurements made here we have always configured the interrupt handling core and the core used for program execution to be on the same CPU (socket), i.e. to be sister cores. This is generally accepted to produce the best performance.

3.5 Driver software and vendor tuning suggestions

Vendor notes are important for determining: the latest driver software version, which kernels are supported (have been targeted during vendor tests), and what advice exists for tuning performance. We used the cxgb3toe (TOE/NIC) driver for the NIC board from Chelsio Communications. It is supposed to work with Linux kernels up to version xx. It was successfully compiled for our opensuse realtime kernel (see above). We have also followed the vendor guidelines found in the driver software for tuning the performance of the system (the perftune.sh script). The ixgbe driver for the Intel NIC board came with the opensuse distribution; the tuning guidelines applied were found in /usr/src/linux/documentation/networking/ixgb.txt.
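Before tuning it is worth confirming which driver versions are actually loaded; a minimal sketch using modinfo and ethtool (eth0 and eth2 are the interface names appearing in the boot log excerpts of section 5 and may differ on other hosts):

# versions of the driver modules as installed
modinfo ixgbe | grep -i "^version"
modinfo cxgb3 | grep -i "^version"
# driver, version and PCI address of the running interfaces
ethtool -i eth0
ethtool -i eth2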
3.6 System and user parameter settings

What are the various parameters, and what is their significance? There are a lot of parameters that could/should influence the network performance measured in our tests. Some of them were found to be the main contributors to the final performance.

- Maximum and default OS send/receive buffer sizes for all types of connections: net.core.wmem_max, net.core.rmem_max, net.core.wmem_default, net.core.rmem_default.
- Minimum, default and maximum TCP send/receive buffer sizes: net.ipv4.tcp_mem, net.ipv4.tcp_wmem, net.ipv4.tcp_rmem.
- Queue size for pending UDP packets before dropping them: net.core.netdev_max_backlog.
- MTU size. The maximum value depends on the NIC board: 9600 for the Chelsio NIC and for the Intel NIC. The larger the value set, the better the throughput reached.
- Interrupt handler affinity. Assigning the interrupt handler to run on a particular core in a multiprocessor system, to avoid context switching overhead.
- Taskset affinity. As above, but the task itself is assigned to a particular core.

Parameters which are not mentioned above or referred to in Table 3 have not been changed from their default values set during driver and system initialization. Uncertainties associated with the values of unseen parameters, and possible changes to them, remain a significant problem when trying to understand throughput.

3.7 Non transfer related activity on the host

The tests performed here are data transfers between a user process (client) on one host and another user process (server) on a different host. As the hosts used were dedicated to the test being made, no other processing load was present. This is not going to be the case in a normal user environment. As an estimate of the effect of additional loading, the command dd if=/dev/sda of=/dev/null was run in parallel with our tests (dd produces a lot of disk reads), as sketched below. No visible degradation in the network performance was seen; we assume that this is due to the use of free cores and the high performance of the MCH subsystem.
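A minimal sketch of the parallel-load check described above; the 1 MB block size and the one minute observation window are illustrative choices, not the ones used for the measurements:

# generate a continuous disk read load in the background
dd if=/dev/sda of=/dev/null bs=1M &
DD_PID=$!
# watch per-core cpu usage and interrupt rates while the transfer test runs
mpstat -P ALL 5 12
# stop the background load again
kill $DD_PID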
4 Host hardware and software summary

Two identical host machines were used during the NIC tests. The relevant hardware configuration details are summarized in Table 1.

Component    Attribute              Value
Main board   Type                   Intel D5400XS Skulltrail S771 E54000 EATX
             CPU sockets            2 x XEON or Core2Extreme LGA
             Front Side Bus (FSB)   / 1333 / 1066 MHz
             Chipset                E5400
CPU          Type                   Intel XEON E  GHz
             FSB                    1333
             L2 cache / speed       2 x 6MB / 2.5 GHz
Memory       Type                   2 x RAM FB-DIMM 8GB-800MHz Kingston KVR800D2D 4F5K2
PCIe         Protocol / Lanes       v 2.3? / 9 x4, configurable as x16, x8, and x4

Table 1 Host configuration

An opensuse 64bit operating system,
# cat /etc/suse-release
opensuse 11.0 (X86-64),
with the rt realtime kernel,
# uname -a
Linux hostname rt #1 SMP PREEMPT x86_64 GNU/Linux,
was used during the measurements described here.

5 NIC hardware and software summary

Two NIC boards have been purchased and used in the measurements performed:

1. Intel 10 Gigabit XF SR Server Adapter (product nr. EXPX9501AFXSR, see )
2. Chelsio S310E-SR+ (see ).

In this note the vendor name will be used to identify which board is being discussed; Intel when referring to the EXPX9501AFXSR and Chelsio for the S310E-SR+. The properties of the NIC cards are summarized in Table 2.
Property                  Intel       Chelsio
PCIe protocol             v 2.0       v 1.1
PCIe lanes                x 8         x 8
TCP Offload Engine        No          Yes (can be disabled)
Transceiver form factor   SFP         SFP+
Fibre / connections       MMF / LC    MMF / LC
Max. MTU (Jumbo)          Bytes       9600 Bytes

Table 2 NIC properties

The NIC drivers used during the tests were the Intel 10 Gigabit PCI Network Driver version ,
# grep Intel /var/log/boot.msg | grep 10
<6>ixgbe: Intel(R) 10 Gigabit PCI Network Driver version
<6>ixgbe 0000:07:00.0: Intel 10 Gigabit Network Connection
done eth0 device: Intel Corporation 82598EB 10 Gigabit AF Network Connection (rev 01),
and the Chelsio T3 Network Driver version ,
# grep Chelsio /var/log/boot.msg
<6>Chelsio T3 Network Driver version
<6>eth2: Chelsio T310 10GBASE-R RNIC (rev 4) PCI Express x8 MSI-X

6 Test setup

The test setup used is shown in Figure 2. The 10GE network cards were allocated IP addresses on the hidden subnet x and physically connected back-to-back to each other as required using short fibre cables. The 1GE connections were used for host access during testing.

Figure 2 Test system network connections
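Bringing up the back-to-back 10GE link amounts to assigning addresses and enabling jumbo frames on both ends; a minimal sketch in which the interface name eth2 and the 192.168.10.x addresses are purely illustrative (the subnet actually used is not given here):

# host A (illustrative address)
ifconfig eth2 192.168.10.1 netmask 255.255.255.0 mtu 9600 up
# host B (illustrative address)
ifconfig eth2 192.168.10.2 netmask 255.255.255.0 mtu 9600 up
# verify the MTU took effect and check connectivity with a large, unfragmented ping
ifconfig eth2
ping -M do -s 8972 -c 3 192.168.10.2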
7 Starting point settings and performance measurements

In view of the large choice of parameters available, the baseline settings summarized in Table 3 have been used for initial performance measurements. Note that IPv4 routing has been used throughout.

Parameter                     Value        Comment
tcp_window_scaling            1            i.e. enabled
net.ipv4.tcp_sack             0            No visible influence
net.ipv4.tcp_mem              / /          Default
net.ipv4.tcp_wmem             4096 / /
net.ipv4.tcp_rmem             4096 / /
net.core.wmem_max
net.core.rmem_max
net.core.wmem_default
net.core.rmem_default
net.core.netdev_max_backlog                Crucial for UDP packet loss
net.core.optmem_max                        For scatter/gather interface
tcp_nodelay                   off
tcp_sndbuffer                 256k         (= msg len)
tcp_rcvbuffer                 256k         (= msg len)
TOE                           disabled     Chelsio only parameter
MTU                           9600         Jumbo packet size
Irq_balancing                 0            disabled
Interrupt core                6            Starting from 0
Send core                     4
Recv core                     5

Table 3 Starting point parameter settings
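The core assignments of Table 3 can be applied with standard tools; a minimal sketch, in which the IRQ numbers 4337/4338 come from the /proc/interrupts excerpt in section 13.2, the xperf srv/clnt programs are those of section 12.3, and the sysctl values are illustrative placeholders rather than the exact values of Table 3:

# disable interrupt balancing so that manual affinity settings stick (Irq_balancing 0)
killall irqbalance 2>/dev/null
# pin the NIC rx/tx interrupts to core 6 (hex bitmask 40 = core 6)
echo 40 > /proc/irq/4337/smp_affinity
echo 40 > /proc/irq/4338/smp_affinity
# enlarge socket buffers and the backlog queue (illustrative values)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.core.netdev_max_backlog=250000
# on the receiving host: run the server on core 5
taskset -c 5 ./srv
# on the sending host: run the client on core 4
taskset -c 4 ./clnt -H <receiver-ip> -i5 -t60 -l256k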
The network performance measured with the baseline settings is summarized in Table 4. The TCP and UDP throughputs are remarkably similar, ~7.5 Gb/s, and independent of performance testing tool, NIC used and transmission direction. Differences between the two NICs are apparent when the interrupt rates are compared; the Chelsio system appears to generate a higher interrupt rate.

Protocol   What           Chelsio to Intel / Intel to Chelsio / Chelsio to Chelsio / Intel to Intel (xperf/netperf)
TCP        bw (Gb/s)      / /
           prog core %    /70 100/
           Interrupts/s   k 14k 12k 91k 30/52k 94/43k 12k 14k
UDP        bw (Gb/s)
           Packet loss
           prog core %
           Interrupts/s   k 8.7k 4k 81k 3k 80k 4k 9k

Table 4 Baseline setting network performance (xperf runs on the same core as the interrupts); 43 = sender, 44 = receiver.

Some of the measurements appear to be limited by the core running the testing tool reaching 100% cpu usage. It was not possible to increase the throughput significantly (>5%) by running two sets of senders and receivers on different cores; the cpu usage simply dropped to 50% on the two cores used.

With the settings used, no packet loss was observed using the UDP protocol. Packet loss is most likely minimized by using a realtime kernel and setting the backlog parameter to a large value. The packet loss rate as a function of backlog size for Intel-to-Intel transmission is shown in XXX.

8 Conclusions

Sustained TCP and UDP throughputs of ~7.5 Gb/s (940 MB/s) have been measured with both NIC boards tested on the host hardware used. No UDP packet loss was seen, which we attribute to the use of a realtime kernel and the correct setting of baseline parameters such as the backlog. The cpu usage on the cores running the test programs is seen to reach 100%, but cannot currently be identified as a bottleneck. The documented PCIe, FB-DIMM and FSB bandwidths are significantly higher than the rates we think are reached during testing, which suggests that these are also not a bottleneck. More work is required to understand possible bottlenecks. The CPU frequency, 2.5 GHz, and FSB clock, 1333 MTS, could be increased by replacing the CPUs with higher performance ones.
Our understanding of UDP throughput management and monitoring has improved to the point where tests with the HPAD (AGIPD) detector front end interface should be made.

9 Follow up work and open questions

This section lists points that should be looked at.

- Why does UDP performance equal TCP performance, and should it?
- Look at the error rates on TCP: are they the same as the packet losses with UDP?
- RDMA: how to use it, and other features of the NICs.
- TOE has not been used on the Chelsio board; what does it promise, what does it achieve?
- Multiple NICs used as a single logical unit. Bonding.
- Find the reason for seeing only 75% of the maximum throughput. Multiple versions of xperf (2 senders & 2 receivers) increased 7.5 Gb/s to 7.9 Gb/s, i.e. with two program cores at 50%. Where is the bottleneck: bus, memory, PCIe, elsewhere? BUS/logic analyzer on the PCIe buses.
- Finish the not yet complete sections.

10 Glossary

Terms used and what they mean (not yet complete):

MSI-X = Message Signalled Interrupts. This is an alternate form of interrupt differing from the traditional pin-signalled system; instead of asserting a given IRQ pin, a message is written to a segment of system memory. Each device can have from 1 to 32 unique memory locations (MSI, up to 2048 with the newer MSI-X standard) in which to write MSI events. An advantage of the MSI system is that data can be pushed along with the MSI event, allowing for greater functionality.

11 Parameters

This section describes the parameters which may have been modified during the measurements. Not yet complete.

tcp_window_scaling = controls whether the size of the sliding window is allowed to be dynamically modified. (disable: sysctl -w net.ipv4.tcp_window_scaling=0; enable: =1)
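Related to the MSI-X entry in the glossary above, whether a NIC actually uses MSI/MSI-X rather than legacy pin interrupts can be checked from user space; a minimal sketch, again assuming the Intel NIC's PCI address 07:00.0 from the boot log in section 5:

# MSI / MSI-X capability entries of the NIC and whether they are enabled
lspci -vv -s 07:00.0 | grep -i msi
# MSI(-X) vectors appear as PCI-MSI entries in /proc/interrupts
grep -i msi /proc/interrupts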
12 Performance testing tools used

This section describes the tools used for performance measurement. Not yet complete.

12.1 Netperf

Throughput server/client test program. Note that the -T 5,5 client option determines which cores will be used by the server and client programs.

Server: netserver
Client: netperf -P1 -l 60 -H  -T 5,5

TCP STREAM TEST from ( ) port 0 AF_INET to ( ) port 0 AF_INET : cpu bind
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

12.2 Iperf

Throughput server/client test program.

Server: iperf -s -i5
Client: iperf -c  -l256k -i5 -t60 -fm

Client connecting to , TCP port 5001
TCP window size: 9.54 MByte (default)
[ 3] local  port  connected with  port 5001
[ ID] Interval   Transfer      Bandwidth
[ 3]  sec        4368 MBytes   874 MBytes/sec
[ ID] Interval   Transfer      Bandwidth
[ 3]  sec        4360 MBytes   872 MBytes/sec
[ ID] Interval   Transfer      Bandwidth
[ 3]  sec         MBytes       872 MBytes/sec
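The iperf runs above use TCP; for UDP packet loss tests an equivalent UDP invocation can be used. A minimal sketch, where the offered rate of 7500M, the 8192 byte datagram size and the <receiver-ip> placeholder are illustrative choices rather than the exact options used for Table 4:

# receiver: UDP server, reporting every 5 s (the loss statistics appear on this side)
iperf -s -u -i5
# sender: offer ~7.5 Gb/s of 8192 byte datagrams (fits within the 9600 byte MTU) for 60 s
iperf -c <receiver-ip> -u -b 7500M -l 8192 -i5 -t60 -fm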
12.3 Xperf

Throughput server/client test program, written for the tests performed here. During the initial measurement stage very different results were obtained for the Chelsio and Intel NICs, which made us suspicious of the netperf and iperf test programs; needless to say this was unfounded, and the observed effects were due to using incorrect kernels with the NIC drivers, etc.

Server: ./srv
Client: ./clnt -H  -i5 -t100 -l256k

Remote host:
Remote port:
Network protocol: TCP
Message length:
Test time: 60
Report time: 5
Connections: 1
Some network related parameters are...
net.core.wmem_max =
net.core.rmem_max =
net.core.wmem_default =
net.core.rmem_default =
net.core.max_backlog =
net.core.optmem_max =
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 0
net.ipv4.tcp_mem =
net.ipv4.tcp_wmem =
net.ipv4.tcp_rmem =
T  5s:  4.6 GB   MB/s (7.33 Gb/s) Pkg# 0 PLoss# 0 (0.0000%), Msg# 0 MLoss# 0 (0.0000%) 1m:0.08 5m: m:0.09
T 55s: 50.3 GB   MB/s (7.31 Gb/s) Pkg# 0 PLoss# 0 (0.0000%), Msg# 0 MLoss# 0 (0.0000%) 1m:0.44 5m: m:0.12
********************** A L A R M ************************
T 60s: 54.8 GB   MB/s (7.31 Gb/s) Pkg# 0 PLoss# 0 (0.0000%), Msg# 0 MLoss# 0 (0.0000%) 1m:0.49 5m: m:
/05/08 17:48:18 TCP connection ' :60516' <--X--> ' :12345'
T 60s: 54.8 GB   MB/s (7.31 Gb/s) Pkg# 0 PLoss# 0 (0.0000%), Msg# 0 MLoss# 0 (0.0000%) 1m:0.49 5m: m:0.12
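xperf reports packet and message loss at the application level; the interface and kernel counters give an independent cross-check after a run. A minimal sketch, with eth2 as the 10GE interface as in the examples above:

# drops and overruns counted by the interface itself
ifconfig eth2 | grep -E "RX packets|dropped|overruns"
# UDP statistics of the kernel (receive errors indicate packets dropped before reaching the application)
netstat -su | grep -i error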
13 Monitoring tools used

This section lists the monitoring tools used.

13.1 Mpstat

Used to monitor per-core cpu usage and interrupt rates.

mpstat -P ALL 5
05:30:18 PM  CPU  %user  %nice  %sys  %iowait  %irq  %soft  %steal  %idle  intr/s
05:30:23 PM  all
05:30:23 PM
05:30:23 PM
05:30:23 PM
05:30:23 PM
05:30:23 PM
05:30:23 PM
05:30:23 PM
05:30:23 PM
05:30:23 PM

13.2 Interrupt inspection

grep eth2 /proc/interrupts
4336:   PCI-MSI-edge   eth0-lsc
4337:   PCI-MSI-edge   eth0-rx0
4338:   PCI-MSI-edge   eth0-tx

13.3 Setting interrupt affinity

echo 1 > /proc/irq/4337/smp_affinity
echo 2 > /proc/irq/4338/smp_affinity

13.4 Other monitoring tools

Not yet complete.

vmstat =
netstat =
lspci =
oprofile =
modinfo =
sar =

14 Configuration tools

The tools used to modify configurations are listed. Not yet complete.

14.1 sysctl

sysctl -w net.core.wmem_max=

14.2 ifconfig

ifconfig eth2 down
ifconfig eth2 mtu 9600 txqueuelen  up

14.3 ethtool

ethtool -k eth

14.4 Other configuration tools

setpci =
/proc =
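Beyond the -k query shown in section 14.3, ethtool can display several other NIC settings that matter for throughput; a minimal sketch for the Chelsio interface eth2 (which options and counters are available depends on the driver):

# current offload settings (TSO, checksum offload, scatter/gather, ...)
ethtool -k eth2
# ring buffer sizes set by the driver and their maxima
ethtool -g eth2
# interrupt coalescing settings
ethtool -c eth2
# per-driver statistics, useful for spotting drops on the NIC itself
ethtool -S eth2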
15 Packet handling

The motivation for including this section is to give an understanding of what happens to a packet, from its arrival at the NIC until it appears in the user buffer. The current, stolen, implementation of the section is a placeholder. The following paragraphs are stolen from the Interrupt Swizzling Solution for Intel 5000 Chipset Series-based Platforms - Application Note ( ) and should apply to Linux kernels from 2.4 onwards.

Figure 3 Intel E5000 handbook showing bus connectivity of host main board

In a typical network processing scenario, a network packet is received over an Ethernet port. The NIC controlling that port deposits the packet in memory and signals the CPU through a HW interrupt. The CPU runs an interrupt service routine (ISR) to attend to the HW interrupt and update the HW status of the network device. At the end of the ISR, the CPU queues the follow-up work for processing the received packet(s) by signaling a SoftIRQ. SoftIRQs, being highly privileged threads in Linux, are closely guarded resources. The networking stack has one SoftIRQ permanently assigned in the architecture of the Linux kernel. SoftIRQs are a per-CPU resource, hence there is one networking SoftIRQ per available core on SMP platforms. As mentioned, all packet processing, that is, queuing and dequeuing packets from the network interface, IPv4/IPv6 processing and TCP/UDP processing, happens in Linux in the SoftIRQ context. In fact, several layer 3 and layer 4 networking functions such as firewall, VPN, proxy and intrusion detection are processed in the same SoftIRQ context upon receiving a packet. Since SoftIRQs are CPU specific and are triggered through ISRs for specific HW devices, functions such as routing (or layer 3 forwarding) have affinity at the software level to a specific CPU core. This is the same core that receives the hardware interrupt upon receipt of a packet.
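The split between hardware interrupt and SoftIRQ work described above can be observed directly, and packets dropped because the backlog queue (net.core.netdev_max_backlog, section 3.6) overflowed are counted in /proc/net/softnet_stat. A minimal sketch; reading the second column as the drop counter is the usual interpretation but may differ between kernel versions:

# hardware interrupt counts per core for the NIC queues
grep eth /proc/interrupts
# the %irq and %soft columns show on which cores ISR and SoftIRQ work is done
mpstat -P ALL 5 3
# one line per core (hex): packets processed, packets dropped (backlog full), time squeezes
cat /proc/net/softnet_stat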
16 References

Additional information sources which were found to be useful whilst pursuing the aim of this note are listed below.

1. 10Gb Ethernet, M.Wagner, Red Hat. (18-20th June 2008). A slide show presentation about 10Gb Ethernet performance, tuning and functionality on RHEL5.X (Red Hat Enterprise Linux release 5.X).

2. Anatomy of real-time Linux architectures. From soft to hard realtime. (15 Apr 2008). M.Tim Jones ([email protected]), Consultant Engineer, Emulex Corp. linux/index.html?s_tact=105agx99&s_cmp=cp