Low Latency Test Report: Ultra-Low Latency 10GbE Switch and Adapter Testing
Bruce Tolley, PhD, Solarflare
Testing conducted by Solarflare and Fujitsu shows that ultra-low-latency application-level performance can be achieved with commercially available 10G Ethernet switch and server adapter products.

ABSTRACT

Solarflare, the pioneer in high-performance, low-latency 10GbE server networking and application acceleration middleware solutions, and Fujitsu recently completed switch-to-server latency and message rate testing. The tests found that the Solarflare SFN5122F server adapter and Solarflare OpenOnload application acceleration middleware, in combination with the Fujitsu XG 2600 10GbE switch, achieved 4.5 microsecond mean latency over 10GbE in a TCP latency test with the 64 byte packet size typical of market data applications. The Fujitsu switch is one of the lowest latency switches on the market, demonstrating a mean latency in this test of 427 nanoseconds. With back-to-back servers, the server adapter achieved impressive minimum TCP latencies as low as 4.1 microseconds. The latency of the overall system was also very deterministic, with 99% of the messages being delivered with a latency of less than 4.5 microseconds and near-zero jitter. Solarflare and Fujitsu measured the performance of TCP and UDP messaging using Solarflare-developed benchmarks with commercially available products: the Solarflare SFN5122F 10 Gigabit server adapters and the Fujitsu 10 Gigabit switch. The test platform used servers and processors typically found in use by financial firms today.

THE NEED FOR LOW LATENCY IN AUTOMATED, REAL-TIME TRADING

The rapid expansion of automated and algorithmic trading has increased the critical role of network and server technology in market trading, first in the requirement for low latency and second in the need for high throughput to process the high volume of transactions. Given the critical demand for information technology, private and public companies that are active in electronic markets continue to invest in their LAN and WAN networks and in the server infrastructure that carries market data and trading information. In some trading markets, firms can profit from less than one millisecond of advantage over competitors, which drives them to search for sub-millisecond optimizations in their trading systems. The spread of automated trading across geographies and asset classes, and the resulting imperative to exploit arbitrage opportunities based on latency, has increased the focus on, if not created an obsession with, latency.
With this combination of forces, technologists, IT and data center managers in the financial services sector are constantly evaluating new technologies that can optimize performance. One layer of the technology stack that receives continuous scrutiny is messaging, i.e., the transmission of information from one process to another, over networks, with specialized home-grown or commercial messaging middleware. The ability to handle predictably the rapid growth of data traffic in the capital markets continues to be a major concern. As markets become more volatile, large volumes of traffic can overwhelm systems, increase latency unpredictably, and throw off application algorithms. Within limits, some algorithmic trading applications are more sensitive to the predictability of latency than they are to the mean latency. It is therefore very important for the network solution stack to perform not just with low latency but with bounded, predictable latency. Solarflare and Fujitsu demonstrate in this paper that, because of its low and predictable latency, a UDP multicast network built with 10 Gigabit Ethernet (10GigE) can become the foundation of messaging systems used in the financial markets.

FINANCIAL SERVICES AND OTHER APPLICATIONS THAT CAN TAKE ADVANTAGE OF LOW-LATENCY UDP MULTICAST

Messaging middleware applications were named above as one key financial services application that produces and consumes large amounts of multicast data and can take advantage of low-latency UDP multicast. Other applications in the financial services industry that can take advantage of low-latency UDP multicast data include:

- Market data feed handler software that takes multicast data feeds as input and uses multicasting as the distribution mechanism
- Caching/data distribution applications that use multicast for cache creation or to maintain data state
- Any application that makes use of multicast and requires high packets-per-second (pps) rates, low data distribution latency, low CPU utilization, and increased application scalability

CLOUD NETWORKING AND BROADER MARKET IMPLICATIONS OF LOW LATENCY TO SUPPORT REAL-TIME APPLICATIONS

As stated above, the low-latency UDP multicast solution provided by Fujitsu switches and Solarflare server adapters can provide compelling benefit to any application that depends on multicast traffic and additionally requires high throughput, low-latency data distribution, low CPU utilization, and increased application scalability. Typical applications that benefit from lower latency include medical imaging, radar and other data acquisition systems, and seismic image processing in oil and gas exploration. Moving forward, cloud networking is a market segment where requirements for throughput, low latency and real-time application performance will also develop. The increasing deployment and build-out of both public and private clouds will drive the increased adoption of social networking and Web 2.0 applications. These cloud applications will incorporate real-time media and video distribution and will need lower latency for both business-to-consumer (B2C) and business-to-business (B2B) needs. Perhaps more fundamentally, the need for real-time, high-speed analytics and search of large and often unstructured data sets will create demand for low latency and real-time application response.
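The multicast consumers described above all receive their feeds through standard sockets. As a concrete illustration only, the sketch below shows a minimal multicast feed receiver written against the BSD sockets API; the group address 239.1.1.1 and port 30001 are arbitrary placeholders, not values from the test configuration. Because it uses plain sockets, the same binary can run over the kernel stack or be accelerated by socket-compatible middleware such as OpenOnload.

```c
/* Minimal multicast feed receiver sketch (group and port are illustrative only).
 * Uses only standard BSD sockets, so it runs unchanged on the kernel stack and
 * can also be accelerated by socket-compatible middleware such as OpenOnload. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Bind to the multicast port on all interfaces. */
    struct sockaddr_in local = { 0 };
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(30001);                        /* placeholder port */
    if (bind(sock, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind"); return 1;
    }

    /* Join the multicast group carrying the market data feed. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");   /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP"); return 1;
    }

    /* Receive loop: each datagram is one market data message. */
    char buf[1500];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n < 0) { perror("recv"); break; }
        /* ... hand the message to the feed handler ... */
    }
    close(sock);
    return 0;
}
```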
SOLARFLARE'S OPENONLOAD BENEFITS

Solarflare and Fujitsu measured the latency performance of messaging using Solarflare-developed benchmarks with commercially available products: the Solarflare SFN5122F SFP+ 10 Gigabit Ethernet server adapters, Solarflare OpenOnload application acceleration middleware, and the Fujitsu 10 Gigabit switch. A list of the hardware configurations and the benchmarks used is attached as an Appendix. The test platform used servers and processors typically found in use by financial firms today. The tests described below were run both switch to server adapter and server adapter to server adapter. The adapters were run in kernel mode and in OpenOnload mode.

OpenOnload is an open-source, high-performance application acceleration middleware product. By improving the CPU efficiency of the servers, OpenOnload enables applications to leverage more server resources, resulting in dramatically accelerated application performance without changing the existing IT infrastructure. Using standard Ethernet, the solution combines state-of-the-art Ethernet switching and server technologies that dramatically accelerate applications. OpenOnload performs network processing at user level and is binary-compatible with existing applications that use TCP/UDP with BSD sockets. It comprises a user-level shared library that implements the protocol stack, and a supporting kernel module.

FUNDAMENTAL FINDINGS

Exhibit 1: Half-Round Trip TCP Latency in Nanoseconds (columns: size, mean, min, median, max, 99%ile, stddev; rows: TCP latency with OpenOnload via the switch, TCP latency with OpenOnload back to back)

Exhibit 1 summarizes the results of TCP latency testing. The Fujitsu switch is an ultra-low latency switch, contributing a mean latency of 427 nanoseconds to the system latency. In the testing for the 64 byte message sizes typical of market data messaging systems, very low latency was observed. The Solarflare server adapter in combination with the Fujitsu switch achieved a mean latency of 4.5 microseconds. The Solarflare adapters back to back achieved an amazingly low latency of 4.1 microseconds. This latency was also very deterministic, with 99% of the messages achieving a latency of less than 5.0 microseconds in the switch to server adapter configuration. Comparing the two values in the standard deviation column, the Fujitsu switch appears to contribute near-zero jitter.
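The benchmarks themselves are Solarflare-developed (see the Appendix) and are not reproduced here. For orientation only, the following is a minimal sketch of the client side of a TCP ping-pong test of the kind these results come from: send a 64 byte message, block until the peer echoes it back, and record half the round-trip time. The server address and port are placeholders, and the peer is assumed to simply echo each message.

```c
/* Client side of a TCP ping-pong latency sketch (not the actual Solarflare
 * benchmark): send MSG_SIZE bytes, wait for the echo, record RTT/2.
 * The server address and port are placeholders; the peer must echo each message. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define MSG_SIZE 64
#define ITERS    100000

static double now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)); /* disable Nagle */

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(30002);                         /* placeholder port   */
    inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr);   /* placeholder server */
    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect"); return 1;
    }

    char buf[MSG_SIZE] = { 0 };
    double total_ns = 0.0;
    for (int i = 0; i < ITERS; i++) {
        double t0 = now_ns();
        if (send(sock, buf, MSG_SIZE, 0) != MSG_SIZE) { perror("send"); return 1; }
        ssize_t got = 0;
        while (got < MSG_SIZE) {                          /* wait for the full echo */
            ssize_t n = recv(sock, buf + got, MSG_SIZE - got, 0);
            if (n <= 0) { perror("recv"); return 1; }
            got += n;
        }
        total_ns += (now_ns() - t0) / 2.0;                /* half round trip */
    }
    printf("mean half-RTT: %.0f ns over %d messages\n", total_ns / ITERS, ITERS);
    close(sock);
    return 0;
}
```

Because the sketch uses only standard BSD sockets, the same unmodified binary can be timed over the kernel stack and then, assuming the standard OpenOnload launcher is installed, relaunched through the user-level stack (typically `onload ./tcp_pingpong`) for comparison.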
Exhibit 2: TCP Latency Performance vs. Message Size

Exhibit 2 above extends the data from Exhibit 1; the x axis represents message size in bytes and the y axis represents TCP half round trip latency in microseconds. The data shows that the system demonstrated very low and deterministic latency from small message sizes up to large message sizes of 1500 bytes. The data plot also shows low latency in both kernel and OpenOnload mode. The Solarflare adapters back to back achieved an amazingly low minimum latency of 3.96 microseconds. This latency was very deterministic, as demonstrated by the flatness of the curve as the message size approaches 1500 bytes. At a message size of 2048 bytes, mean latency remained low.

Exhibit 3 (next page) plots UDP half round trip latency; the x axis represents message size in bytes and the y axis represents latency in microseconds. The data shows that the system demonstrated very low and deterministic latency from small message sizes up to very large message sizes of 1024 bytes. The data plot also shows very low latency in both kernel and OpenOnload mode. In OpenOnload mode, minimum latencies for 64 byte messages were only a few microseconds with the switch and server adapter, and lower still with the server adapters back to back.
Exhibit 3: UDP Half Round Trip Latency

Exhibit 4 (next page) shows two plots of UDP multicast performance versus desired message rate, with and without OpenOnload. This test simulates a traffic pattern that is common in financial services applications. In the test, the system streams small messages from a sender to a receiver. The receiver reflects a small proportion of the messages back to the sender, which the sender uses to calculate the round-trip latency. The x axis shows the target message rate that the sender is trying to achieve. The y axis shows the one-way latency (including a switch) and the achieved message rate. The kernel results are measured with Solarflare server adapters without OpenOnload. The plot combines results from three runs: kernel to kernel, OpenOnload to kernel, and OpenOnload to OpenOnload. The OpenOnload to kernel test is needed in order to fully stress the kernel receive performance.
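To make the traffic pattern concrete, the sketch below shows a sender-side loop following the scheme just described: small UDP messages are paced toward a target rate, a small fraction are flagged for reflection, and round-trip latency is sampled from the reflected copies. This is an illustrative sketch only, not the Solarflare benchmark; the peer address, port, message size, target rate, and reflection ratio are all placeholder values, and the receiver is assumed to echo flagged messages back unchanged.

```c
/* Sender-side sketch of the streaming test pattern described above (not the
 * actual Solarflare benchmark).  Messages are paced toward a target rate;
 * every REFLECT_EVERY-th message is flagged for the receiver to echo back,
 * and the sender samples round-trip latency from the echoed copies. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>

#define MSG_SIZE       32
#define TARGET_MPS     1000000        /* placeholder target message rate */
#define REFLECT_EVERY  1000           /* reflect roughly 0.1% of messages */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(30003);                         /* placeholder port     */
    inet_pton(AF_INET, "192.168.1.20", &peer.sin_addr);   /* placeholder receiver */
    connect(sock, (struct sockaddr *)&peer, sizeof(peer));

    uint64_t interval_ns = 1000000000ull / TARGET_MPS;    /* pacing interval */
    uint64_t next_send = now_ns();
    char msg[MSG_SIZE] = { 0 }, reply[MSG_SIZE];

    for (uint64_t seq = 0; ; seq++) {
        while (now_ns() < next_send)                      /* busy-wait pacing */
            ;
        next_send += interval_ns;

        uint64_t t0 = now_ns();
        memcpy(msg, &t0, sizeof(t0));                     /* timestamp each message   */
        msg[sizeof(t0)] = (seq % REFLECT_EVERY == 0);     /* flag messages to reflect */
        send(sock, msg, MSG_SIZE, 0);

        /* Drain any reflected copies without blocking and sample the RTT. */
        while (recv(sock, reply, MSG_SIZE, MSG_DONTWAIT) == MSG_SIZE) {
            uint64_t t_sent;
            memcpy(&t_sent, reply, sizeof(t_sent));
            printf("rtt sample: %llu ns\n",
                   (unsigned long long)(now_ns() - t_sent));
        }
    }
}
```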
Exhibit 4: Message Rates Achieved with Upstream UDP

The top plot, labeled Round Trip Latency, shows the improved, deterministic low latency achieved with the Solarflare adapter, OpenOnload, and the Fujitsu switch. The y axis shows the round trip latency, while the x axis shows the desired message rate, in millions of messages per second, at the receiver. With OpenOnload, not only is the system performing at much lower latency, but the latency is predictable and deterministic over the range of expected message rates. This is precisely the attribute desired in trading systems or any other application demanding real-time performance. The second plot in Exhibit 4, Message Rate Achieved, shows the Solarflare OpenOnload system's ability to scale and perform as the message rate is increased. This is in contrast to the kernel stack, where the greater CPU processing overheads of the stack limit performance as higher levels of load are put on the system.

THE SOLARFLARE SOLUTION

The SFN5122F 10GbE SFP+ server adapter is the most frequently recommended 10Gb Ethernet server adapter for trading networks in New York, London and Chicago. In both kernel and OpenOnload modes, the adapter supports high frequency trading and HPC applications which demand very low latency and very high throughput. Tests were performed using the standard kernel TCP/IP stack as well as Solarflare OpenOnload mode. OpenOnload is an open-source application acceleration middleware product from Solarflare. As Exhibit 5 shows, OpenOnload provides an optimized TCP/UDP stack in the application domain which can communicate directly with the
Solarflare server adapter. With OpenOnload, the adapter provides the application with protected, direct access to the network, bypassing the OS kernel and hence reducing networking overheads and latency. The typical TCP/UDP/IP stack resides as part of the kernel environment and suffers performance penalties due to context switching between the kernel and application layers, the copying of data between kernel and application buffers, and high levels of interrupt handling.

Exhibit 5: The Solarflare Architecture for OpenOnload

The kernel TCP/UDP/IP and OpenOnload stacks can co-exist in the same environment. This co-existence, which is based on Solarflare's hybrid architecture, allows applications that require a kernel-based stack to run simultaneously with OpenOnload. This coexistence feature was leveraged as part of the testing, where the benchmarks were run through both the kernel and OpenOnload stacks in back-to-back fashion using the same build and without having to reboot the systems.

FUJITSU XG 2600 ULTRA-LOW-LATENCY 10GBE SWITCH: INDUSTRY-LEADING LAYER 2 PERFORMANCE, SCALABILITY, AND HIGH AVAILABILITY

The Fujitsu XG 2600 offers industry-leading low latency in 10GbE switching. The XG 2600 is powered by Fujitsu's 4th generation AXEL-X2 ASIC, with measured chip-level performance as low as 300 ns latency. The switch is a 26-port SFP+ solution in a compact 1RU form factor and is highly reliable, with configurable airflow options (front-to-rear or rear-to-front). With flexible SFP+ interfaces (copper twinax, SR and LR), it allows you to build the ideal network architecture for your requirements. The XG 2600 also features the lowest power consumption in its class, at less than 5 watts per port. The Fujitsu XG 2600 10GbE switch is designed for high-performance systems.
CONCLUSIONS

The findings analyzed in this white paper represent the results of testing the transmit latency of a configuration with the Solarflare server adapter with OpenOnload and the Fujitsu switch at transmission rates up to 3 million messages/second (mps). For the 64 byte message sizes typical of market data messaging systems, very low latency was observed:

- TCP latency: mean did not exceed 4.5 microseconds with the switch
- TCP latency: mean did not exceed 4.1 microseconds without the switch
- For TCP messaging traffic, the 99th percentile did not exceed 5.0 microseconds with the switch
- For UDP latency server to server, mean latency was as low as 3.9 microseconds

The system also demonstrated low jitter for both TCP and UDP traffic, which delivers a very predictable messaging system. With Solarflare's 10GbE server adapter and OpenOnload application acceleration middleware, and Fujitsu's 10GbE switch, off-the-shelf 10GbE hardware can be used as the foundation of messaging systems for electronic trading with no need to re-write applications or use proprietary, specialized hardware. By enabling financial trading customers to implement highly predictable systems, Solarflare's and Fujitsu's 10GbE solutions provide competitive advantages and offer increased overall speeds, more accurate trading and higher profits. By leveraging the server adapter with OpenOnload, IT managers are able to build market data delivery systems designed to handle increasing message rates while reducing message latency and jitter between servers.

SUMMARY

Solarflare Communications and Fujitsu have demonstrated performance levels with 10 Gigabit Ethernet that enable Ethernet to serve as the foundation of messaging systems used in the financial markets. Now, financial firms can use off-the-shelf Ethernet, TCP/IP, UDP and multicast solutions to accelerate market data systems without implementing new wire protocols or changing applications. With off-the-shelf 10GbE gear, Solarflare's server adapter and the Fujitsu switch can be used as the foundation of messaging systems for electronic trading and the support of low-latency UDP multicast, with no need to re-write applications or use proprietary, specialized hardware. IT and data center managers can deploy plain old Ethernet solutions today. Moving forward, Solarflare Communications and Fujitsu also expect high-performance 10G Ethernet solutions with low-latency UDP multicast to become an important technology component of public and private clouds that rely on real-time media distribution for business-to-consumer and business-to-business applications.
ABOUT SOLARFLARE

Solarflare is the pioneer in high-performance, low-latency 10GbE server networking solutions. Our architectural approach combines hardware and software to deliver high-performance adapter products and application acceleration middleware for superior performance in a wide range of applications, including financial services, high performance computing (HPC), cloud computing, storage and virtualized data centers. Solarflare's products are used globally by many of the world's largest companies, and are available from leading distributors and value-added resellers, as well as from Dell and HP. Solarflare is headquartered in Irvine, California and has an R&D site in Cambridge, UK. For more information on Solarflare products, visit the Solarflare website or contact [email protected].

ABOUT FUJITSU FRONTECH NORTH AMERICA

Fujitsu Frontech North America Inc. provides market-focused IT solutions that enable customers to achieve their business objectives through integrated offerings for retail point of sale, self-checkout solutions and kiosks, mobile, cloud computing, digital media solutions, biometric authentication technology, Ethernet switches, RFID tags and currency handling solutions. Working closely with a diverse network of partners for sales, systems integration, and managed services, FFNA delivers industry-specific solutions for the manufacturing, retail, healthcare, government, education, financial services, enterprise and communications sectors throughout North America. For more information, visit the Fujitsu Frontech North America website.

ABOUT THE AUTHOR

Bruce Tolley is responsible for solutions marketing at Solarflare, including technical, event, and partner marketing activities. Previously, he served as Solarflare's Vice President of Marketing, and earlier as Director of Product Management. Prior to joining Solarflare, Bruce was a Senior Product Line Manager at Cisco Systems, where he managed the Ethernet transceiver business, including product life cycle management and the launch of Metro Ethernet, 10 Gigabit, and 1000BASE-T switch solutions. Prior to Cisco, he served in various product and marketing management roles at 3Com Corporation. Formerly Study Group Chair of the IEEE 802.3aq 10GBASE-LRM standards project, Bruce is a frequent contributor to IEEE Ethernet standards projects. He is currently serving as Secretary and Director of the Ethernet Alliance. He is an alumnus of Selwyn College, Cambridge University and Tuebingen University, Germany, and holds MA and PhD degrees from Stanford University and an MBA from the Haas School of Business, UC Berkeley.

APPENDIX: LIST OF BENCHMARKS

Solarflare's test procedures are documented in its Low Latency Quickstart Guide, available from the Solarflare driver download page.
