Low Latency 10 GbE Switching for Data Center, Cluster and Storage Interconnect


White Paper

Introduction: High Performance Data Centers

As the data center continues to evolve to meet rapidly escalating demands for higher levels of performance and resource virtualization, three rather distinct networking requirements have emerged. As shown in Figure 1, the typical server in a high performance data center may require connection to three switching fabrics: a LAN for connecting users and general networking, an inter-processor communications (IPC) fabric for low latency message passing between compute cluster applications, and a storage fabric for access to shared storage/file resources.

Figure 1. Data center switching fabrics

While Ethernet is the de facto technology for the general purpose LAN, it has been widely considered a sub-optimal switching fabric for very high performance cluster interconnect IPC (e.g., for those MPI parallel applications that require extremely low end-to-end latency). In particular, GbE's end-to-end message-passing latency is significantly higher than the <10 microseconds achieved by the more specialized cluster interconnects. In the absence of congestion, end-to-end latency includes two basic components: 1) sending/receiving delay in the end systems/NICs for moving the message between the application buffers and the network, and 2) network latency involved in serializing the message and switching it through network nodes to its destination.

In spite of the latency issues, the cost effectiveness of Gigabit Ethernet has resulted in its being chosen as the IPC interconnect for more than 55 percent of the cluster computers on the June 2007 Top500 list. Therefore, the higher latency of GbE doesn't prevent excellent performance on parallel benchmarks. It has, however, prevented GbE clusters from capturing any of the top 50 positions on the current Top500 list.

As a block storage fabric, Gigabit Ethernet with iSCSI has offered a good medium performance solution for access to networked block storage. However, iSCSI has not yet posed a serious threat to Fibre Channel due to the lower bandwidth of GbE versus 2 and 4 Gbps Fibre Channel, plus Ethernet's traditionally higher CPU utilization, higher memory bandwidth consumption and higher latency.

Recent developments in 10 Gigabit Ethernet NIC hardware and low latency 10 GbE switching are positioning 10 Gigabit Ethernet to offer bandwidth and latency performance that is on a par with, or even surpasses, that of the more specialized interconnects, including Fibre Channel and InfiniBand. These developments will allow network managers to minimize the complexity of the data center by using Ethernet as the "converged" switching technology that can meet the highest performance requirements of each type of data center traffic.

Low Latency Cut-through Ethernet Switching

With cut-through Ethernet switching, the switch delays the packet only long enough to read the Layer 2 packet header and make a forwarding decision based on the destination address and other header fields (e.g., VLAN tag and 802.1p priority field). Switching latency is reduced because packet processing is restricted to the header itself rather than the entire packet.

Cut-through Ethernet switching is not new. In fact, the first Ethernet switches, the EtherSwitches introduced by Kalpana in 1990, were cut-through switches. The fundamental drawback of cut-through switches is that they can't identify and discard all corrupted packets, because the packets are forwarded before the FCS field is received and thus no CRC calculation can be performed. Corrupted packets were common in the early days of Ethernet, partly as the result of collisions in the flat shared Ethernet LANs that were in vogue at the time. In spite of this shortcoming, cut-through switching was the predominant mode of Ethernet switching until the development of Fast Ethernet in 1995.

The emergence of Fast Ethernet and 10/100 Mbps Ethernet switching eliminated much of the latency advantage of cut-through switching because speed changes (i.e., switching from 10 Mbps Ethernet to 100 Mbps Ethernet) force the switch to store-and-forward the packet rather than cutting it through. Fast Ethernet also reduced packet serialization time by a factor of 10, which further eroded the latency advantage of cut-through switching. In this earlier era of networking, the predominant networked applications (e-mail, file transfer, and NetWare file access) were not sensitive to switch latencies in the microsecond range. The advent of 10/100 switching, followed by the subsequent development of 10/100/1000 Layer 2/3 switching, completely eliminated cut-through switching as a viable forwarding mode for general purpose switched LANs.

Cut-through switching, however, is currently enjoying a resurgence as the switching mode for specialized 10 GbE data center interconnects serving applications that are highly sensitive to switch latencies in the microsecond range. Cut-through switching is applicable for data center interconnects that do not require speed changes and are limited enough in diameter/extent to have very low packet error rates. Low error rates mean that only a negligible amount of bandwidth will be wasted on bad packets, which will be dropped by the hardware engines in modern NICs rather than by the cut-through switches.

Ethernet Network Latency: Cut-Through vs. Store-and-Forward

Figure 2. Network latency for a store-and-forward vs. a cut-through switch

Figure 2 illustrates the differences in network latency between store-and-forward and cut-through switches. The store-and-forward switch has to wait for the full packet serialization by the sending NIC before it begins packet processing. The switch latency for a packet is measured as the delay between the last bit into the switch and the first bit out (LIFO) of the switch. After packet processing is complete, the switch has to re-serialize the packet to deliver it to its destination. Therefore, neglecting the small propagation delay over short data center cabling (~5.5 ns/meter), the network latency for a one hop store-and-forward (SAF) switched network is:

    SAF Network Latency = 2 x Serialization Delay + LIFO Switch Latency

In the case of the cut-through switch at the bottom of Figure 2, the switch can begin forwarding the packet to the destination system as soon as the destination address is mapped to the appropriate output port. This means that the cut-through switch can overlap the serialization of the outgoing packet from the switch to the destination end system with the serialization of the incoming packet. The switch latency is measured as the delay between the first bit in and the first bit out (FIFO) of the switch. Therefore, the corresponding network latency through a one hop cut-through (CT) switched network is:

    CT Network Latency = Serialization Delay + FIFO Switch Latency
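These formulas are easy to evaluate directly. The following Python sketch is illustrative only and not from the original paper; the 300 ns cut-through and 16 µs store-and-forward switch latencies are representative values taken from Table 1 later in the paper:

```python
# One-hop network latency: store-and-forward (SAF) vs. cut-through (CT).
# Illustrative sketch; the switch latencies used are representative values
# from Table 1 (300 ns FIFO for the CT switch; 16 us LIFO, the 64B figure,
# for the SAF switch).

LINE_RATE_BPS = 10e9  # 10 GbE

def serialization_delay_s(frame_bytes: int) -> float:
    """Time to clock one frame onto a 10 Gbps link."""
    return frame_bytes * 8 / LINE_RATE_BPS

def saf_one_hop_s(frame_bytes: int, lifo_s: float = 16e-6) -> float:
    # SAF: the frame is serialized twice (sender -> switch, switch -> receiver)
    # plus the last-in, first-out switch latency.
    return 2 * serialization_delay_s(frame_bytes) + lifo_s

def ct_one_hop_s(frame_bytes: int, fifo_s: float = 300e-9) -> float:
    # CT: input and output serialization overlap, so only one serialization
    # delay is paid, plus the first-in, first-out switch latency.
    return serialization_delay_s(frame_bytes) + fifo_s

for frame in (64, 1500, 9252):
    print(f"{frame:>5}B  CT: {ct_one_hop_s(frame) * 1e9:7.0f} ns   "
          f"SAF: {saf_one_hop_s(frame) * 1e6:5.1f} us")
```

For a 64-byte frame, serialization takes 51.2 ns, so the one-hop cut-through latency works out to 351 ns, the value quoted for the S2410 in Table 1.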

The network latency for the cut-through switch is lower for two reasons: 1) only one instance of serialization delay is encountered, which can be a significant factor for larger frame sizes, and 2) the switch latency is lower because of the inherent simplicity of CT switching. Typical switch latency for a 10 GbE store-and-forward switch is in the range of 5-35 microseconds, while the switch latency for a 10 GbE cut-through switch is typically only a few hundred nanoseconds.

As the diameter of the interconnect network increases, the advantage of CT switching becomes more significant. For example, for a 3-hop network, the network latencies for the two types of switches are:

    SAF Network Latency = 4 x Serialization Delay + 3 x LIFO Switch Latency

    CT Network Latency = Serialization Delay + 3 x FIFO Switch Latency

Cut-Through Switching for Cluster and Storage Interconnect

In its current incarnation, cut-through switching is generally based on a single chip, non-blocking switch implementation. Because of the limitations of VLSI technology, the number of high speed switch ports per chip is typically in the range of 8-32, irrespective of the technology involved (Fibre Channel, InfiniBand, Myrinet, or 10 Gigabit Ethernet). For example, the Force10 S2410 cut-through switch has 24 10 GbE ports. A standalone switch can be viewed as a single-tier cut-through switching fabric and provides the advantages of cut-through switching for smaller clusters and storage networks.

The most common technique for building larger cluster and storage interconnect fabrics from smaller switching elements is to aggregate multiple switches in a multi-tiered configuration. One approach is to build a Layer 2 interconnect fabric using cut-through switches in both the aggregation and the access tiers of the network, as shown in Figure 3. Depending on the degree of over-subscription that can be tolerated in a specific application, the number of access switches may be either increased or decreased in proportion to the aggregation switches. In this particular example, dual redundant aggregation switches are used, which means that for any particular access VLAN, one of the aggregation switches is the primary switch and the other switch plays a secondary or backup role. For a particular VLAN, only the LAG to the primary aggregation switch would carry traffic under normal operating conditions, and the second uplink LAG would be blocked per the rapid spanning tree protocol (RSTP).

Figure 3. Two-tier cut-through switching fabric with Force10 S2410 switches

Much larger 10 GbE clusters can be built by aggregating networks similar to the one shown in Figure 3 with a higher density 10 GbE switch such as the Force10 Networks E-Series, as shown in Figure 4. In this configuration, the E-Series ports used by the access switches could be configured to provide either SAF Layer 2 switching or routing among the low latency portions of the network.

Figure 4. Aggregation of S2410 switches with an E-Series switch
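The 1-hop and 3-hop formulas generalize directly to the deeper fabrics just described: an N-hop SAF path pays N+1 serialization delays and N LIFO switch latencies, while an N-hop CT path pays a single serialization delay and N FIFO latencies. The short Python sketch below is illustrative only, using the S2410's 300 ns FIFO latency from Table 1 as the default:

```python
# N-hop network latency for store-and-forward (SAF) and cut-through (CT)
# fabrics, generalizing the 1-hop and 3-hop formulas above (sketch, not
# from the paper; 300 ns is the S2410 FIFO latency from Table 1).

def serialization_s(frame_bytes: int, line_rate_bps: float = 10e9) -> float:
    return frame_bytes * 8 / line_rate_bps

def saf_n_hop_s(frame_bytes: int, hops: int, lifo_s: float) -> float:
    # One serialization per hop plus the sender's, one LIFO latency per hop.
    return (hops + 1) * serialization_s(frame_bytes) + hops * lifo_s

def ct_n_hop_s(frame_bytes: int, hops: int, fifo_s: float = 300e-9) -> float:
    # A single serialization, plus one FIFO latency per hop.
    return serialization_s(frame_bytes) + hops * fifo_s

# 64B frame across the 3-hop fabric of Figure 3 with 300 ns S2410 switches:
# 51.2 ns + 3 x 300 ns = 951 ns
print(f"{ct_n_hop_s(64, hops=3) * 1e9:.0f} ns")
```

The printed value, 951 ns, matches the 64B, 2-tier/3-hop entry in Table 1.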

Table 1. Switch and network latencies for Force10 CT and SAF switches

                                                  S2410 CT    E-Series SAF    Hybrid CT/SAF
    Switch Latency
        64B                                       300 ns      16 µs           N/A
        1500B                                     300 ns      18.3 µs         N/A
        9252B                                     300 ns      33.5 µs         N/A
    Network Latency, 1-Tier/1 Hop (standalone switch)
        64B                                       351 ns      N/A             N/A
        1500B                                     1.5 µs      N/A             N/A
        9252B                                     8.3 µs      N/A             N/A
    Network Latency, 2-Tier/3 Hops (Figure 3)
        64B                                       951 ns      N/A             N/A
        1500B                                     2.1 µs      N/A             N/A
        9252B                                     8.9 µs      N/A             N/A
    Network Latency, 3-Tier/5 Hops (Figure 4)
        64B                                       N/A         N/A             17 µs
        1500B                                     N/A         N/A             35.6 µs
        9252B                                     N/A         N/A             57.2 µs

Further expansion of the cluster network can be achieved by Layer 3 meshing of E-Series switches, with the added advantages of the superior load sharing and path recovery capabilities of equal cost multi-path (ECMP) routing.

Table 1 presents the measured switch latencies for the Force10 S2410 cut-through switch compared to a Force10 TeraScale E-Series 10 GbE store-and-forward switch and summarizes the network latencies that may be expected with the switch fabrics shown in Figures 3 and 4. The network latencies were calculated using the formulas presented earlier in the paper. The cut-through switch and network latencies compare favorably with those of InfiniBand, Fibre Channel and the other specialized interconnects. It is also clear from the table that cut-through switching significantly reduces network latency as a contributor to 10 GbE end-to-end latency, allowing latency reduction efforts to focus on the delays within the end systems. Network designs that maximize the diameter of the cut-through portions will exhibit significantly lower latency.

Intelligent 10 GbE NICs

The traditional Ethernet NIC relies on the host CPU to handle the TCP/IP protocol processing. With a software-based protocol stack, the host CPU is shared between the application and the network. The generally accepted rule of thumb is that each bit per second of network traffic consumes roughly one hertz of CPU capacity. Therefore, a software protocol stack causes CPU utilization to become very high at network bandwidths in excess of 1 Gbps, resulting in the CPU itself becoming the bottleneck that limits throughput and adds significantly to end-to-end latency.

Over the last few years, vendors of intelligent Ethernet NICs, together with the RDMA Consortium and the IETF, have been working on specifications for hardware-accelerated TCP/IP protocol stacks that can support the ever-increasing performance demands of general purpose networking, cluster IPC and storage interconnect over GbE and 10 GbE. The efforts have focused on the technologies shown in Figure 5, which provides a highly simplified overview of hardware-assisted end system protocol stacks.

Figure 5. Intelligent Ethernet NIC protocol stacks

A dedicated TCP offload engine (TOE) is incorporated in the NIC. The TOE offloads essentially all the TCP/IP processing from the host CPU. This greatly reduces CPU utilization and also reduces latency because the protocols are executed in hardware rather than software. Tests have shown that 10 GbE TOE NICs together with cut-through 10 GbE switching are capable of end-to-end, small message latency of about 10 microseconds for MPI over sockets.
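The scale of the offload can be illustrated with the rule of thumb cited above. The following back-of-the-envelope Python sketch is not from the paper, and the 3 GHz core speed is an assumed example value:

```python
# Back-of-the-envelope: host CPU demand of a software TCP/IP stack using
# the ~1 Hz per bit/s rule of thumb. The 3 GHz core speed is an assumed
# example value, not a figure from the paper.

RULE_HZ_PER_BPS = 1.0

def cpu_demand_ghz(throughput_gbps: float) -> float:
    # 1 Gbps of traffic consumes ~1 GHz of CPU under the rule of thumb.
    return throughput_gbps * RULE_HZ_PER_BPS

for gbps in (1, 10):
    demand = cpu_demand_ghz(gbps)
    cores = demand / 3.0  # assumed 3 GHz cores
    print(f"{gbps:>2} Gbps -> ~{demand:.0f} GHz of CPU (~{cores:.1f} x 3 GHz cores)")
```

At 10 Gbps the rule implies roughly 10 GHz of CPU, several cores' worth of processing, which is the burden a TOE moves off the host.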

TOE also provides a major improvement for the throughput of 10 GbE web and NAS servers. The tests cited above also show that TOE NICs can improve Web server performance as much as 10X versus conventional 10 GbE NICs.

Remote direct memory access (RDMA) is a mechanism for offloading data-copy operations from the CPU by allowing direct data transfers between the network and user memory space. RDMA conserves memory bandwidth by eliminating TCP/IP copy requirements (zero copy) and kernel transitions. RDMA is of particular benefit to movement of large blocks of data, such as that required for storage interconnect. The IETF has developed a standard called iWARP (Internet wide area RDMA protocol) for RDMA over TCP/IP. The iWARP specification includes TOE functionality in order to eliminate the major sources of network overhead in TCP/IP processing. Separate work is also progressing on an RDMA interface for the network file system (NFS). With RDMA, the Ethernet CPU utilization for message transfers is expected to be reduced to less than 10 percent regardless of the message throughput level.

iSCSI protocol acceleration is an implementation of the iSCSI protocols in hardware that offloads compute-intensive iSCSI operations from the CPU, improving the throughput and transaction rates and reducing CPU utilization. The RDMA Consortium has developed the iSCSI Extensions for RDMA (iSER) protocol, which provides a datamover architecture (DA) extension that offloads iSCSI data movement and placement operations to the RDMA hardware, while the control aspects remain in software. This may turn out to be a more flexible approach than full iSCSI offload to NIC hardware.

The sockets direct protocol (SDP) is part of the RDMA specification that allows unmodified sockets applications to gain direct access to RDMA-optimized data transfers. Direct access from the application to the RDMA hardware can also help to reduce latency.

Development of offload NICs for 10 GbE has proven to be a fairly significant challenge for Ethernet adapter vendors due to the complexities of the protocols involved. However, there are now a number of vendors in the marketplace with proven high performance products that exploit the various technologies described in this section.

As 10 GbE is adopted as the mainstream converged-switching technology, the high performance data center of the near future will resemble that of Figure 1, with three separate 10 GbE switch fabrics. The general purpose LAN fabric would be based on store-and-forward switching, while the IPC and storage fabrics would be based on cut-through 10 GbE switching. If the highest levels of performance are required, the servers will have separate network interfaces to each fabric. There are two possible scenarios for the evolution of data center NICs: 1) Servers may use three different types of intelligent network interfaces: a TOE NIC optimized for general networking, an iWARP RDMA NIC optimized for low-latency IPC, and an iSCSI NIC optimized for storage networking. 2) Alternatively, a "converged" NIC that supports the full offload suite may emerge as the most cost-effective solution. In this case, a single model of high performance 10 GbE NIC could be deployed throughout the data center, with its mode of offload functionality selected by the data center manager as part of the installation/configuration process.

Layer 2 Fabric Interconnection Enabled by Converged NICs

With a converged NIC it is possible to configure servers with a single 10 GbE NIC for access to both cut-through and store-and-forward LAN fabrics. One example of how this may be done is illustrated in Figure 6. The server with the converged NIC is connected to the cut-through access switch. Some of the uplink ports of each access switch are allocated to intra-fabric connectivity, with the remainder allocated to inter-fabric connectivity. The same is true for the CT aggregation switches. In this example, separate VLANs are configured for intra-fabric and inter-fabric traffic. The direct LAG connection from each access switch to a core LAN switch is designated as the primary path (P) for the inter-fabric traffic to the root switch in the SAF LAN aggregation layer, with a secondary (backup) path (S) directed through one of the CT aggregation switches. For simplicity, the P and S paths are shown only for the access switch on the left of the diagram.

Figure 6. Example of fabric interconnection with converged NICs

A configuration similar to this one has the advantage that the inter-fabric LAN traffic normally bypasses the low latency fabric, eliminating the possibility of contention for low latency bandwidth. In the event of a failure in the primary inter-fabric path, traffic would fail over to the secondary path, where Layer 2 QoS could be configured to provide strict priority for the intra-fabric traffic over general purpose LAN traffic.

Conclusion

With the advent of 10 GbE cut-through switching and intelligent 10 GbE NICs, Ethernet is ready to challenge the specialized low latency interconnect technologies for performance supremacy in IPC cluster interconnect and storage interconnect. These developments are clearing the way for network managers to simplify the technology makeup of the data center and leverage the cost-effectiveness of Ethernet to minimize TCO without any compromise in performance. As data centers move toward virtualized applications and infrastructure, the combination of lower port prices and lower latency will be crucial drivers of the adoption of 10 Gigabit Ethernet as the converged data center switching technology.

With the performance benefits of offload NICs applicable to general purpose servers, such as the Web server front ends of the data center and NAS servers, TOE and iWARP NICs can be expected to ride a fairly steep cost-reduction curve that will benefit lower volume applications such as IPC cluster interconnect and high-end storage networking. For the typical large enterprise, probably the most significant impact of low-latency 10 GbE networking will be the large cost savings realized through deployment of high performance iSCSI SANs and clustered storage as alternatives to DAS, Fibre Channel SANs or InfiniBand SANs.

Force10 Networks, Inc., 350 Holger Way, San Jose, CA, USA

© 2007 Force10 Networks, Inc. All rights reserved. Force10 Networks and E-Series are registered trademarks, and Force10, the Force10 logo, Reliable Business Networking, Force10 Reliable Networking, C-Series, P-Series, S-Series, EtherScale, TeraScale, FTOS, SFTOS, StarSupport and Hot Lock are trademarks of Force10 Networks, Inc. All other company names are trademarks of their respective holders. Information in this document is subject to change without notice. Certain features may not yet be generally available. Force10 Networks, Inc. assumes no responsibility for any errors that may appear in this document.
