Upgrading Data Center Network Architecture to 10 Gigabit Ethernet




Intel IT | IT Best Practices | Data Centers | January 2011

Upgrading Data Center Network Architecture to 10 Gigabit Ethernet

Matt Ammann, Carlos Briceno, Kevin Connell, Sanjay Rungta, Principal Engineer, Intel IT

Executive Overview

To accommodate the increasing demands that data center growth places on Intel's network, Intel IT is converting our data center network architecture to 10 gigabit Ethernet (GbE) connections. Existing 100 megabit per second (Mb/s) and 1 GbE connections no longer support Intel's growing business requirements. Our new 10 GbE data center fabric design will enable us to accommodate current growth and meet increasing network demand in the future.

Intel IT is engaged in a verticalization strategy that optimizes data center resources to meet specific business requirements in different computing areas. Data center trends in three of these areas drove our decision to upgrade, including server virtualization and consolidation in Office and Enterprise computing environments, and rapid growth in Design computing applications and their performance requirements. Additionally, we experience 40 percent per year growth in our Internet connection requirements. While designing the new data center fabric, we tested several 10 GbE connection products and chose those that offered the highest quality, performance, and reliability. We also balanced ideal design against cost considerations.

The new data center fabric design provides many benefits:

- Reduced data center complexity. As virtualization increases, a 10 GbE network allows us to use fewer physical servers and switches.
- Reduced total cost of ownership in a virtualized environment. A 10 GbE fabric has the potential to reduce network cost in our virtualized environment by 18 to 25 percent, mostly due to simplifying the LAN and cable infrastructures. The new system also requires less data center space, power, and cooling.
- Increased throughput. Faster connections and reduced network latency provide design engineers with faster workload completion times and improved productivity.
- Increased agility. The network can easily adapt to meet changing business needs and will enable us to meet future requirements, such as additional storage capacity.

Upgrading our network architecture will optimize our data center infrastructure to respond faster to business needs while enhancing the services and value IT brings to the business.

Contents

Executive Overview
Background
Solution
Simplifying Virtualization for Office and Enterprise Applications
Increasing Throughput and Reducing Latency for Design Applications
Choosing the Right Network Components
Future Plans for Office and Enterprise I/O and Storage Consolidation
Conclusion
Acronyms

IT@INTEL

The IT@Intel program connects IT professionals around the world with their peers inside our organization, sharing lessons learned, methods, and strategies. Our goal is simple: share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/it or contact your local Intel representative if you'd like to learn more.

BACKGROUND

Intel's 97 data centers are at the center of a massive worldwide computing environment, occupying almost 460,000 square feet and housing approximately 100,000 servers. These servers support five main computing application areas, often referred to as DOMES: Design engineering, Office, Manufacturing, Enterprise, and Services.

Intel's rapidly growing business requirements place increasing demands on data center resources. Intel IT is engaged in a verticalization strategy that examines each application area and provides technology solutions that meet specific business needs. We are also developing an Office and Enterprise private cloud, and we see opportunities to expand cloud computing to support manufacturing. These strategies, combined with the following significant trends in several computing application areas, compelled us to evaluate whether our existing 1 gigabit Ethernet (GbE) network infrastructure was still sufficient:

- Large-scale virtualization in the Office and Enterprise computing domains.
- Increasing compute density in the Design computing domain.

In addition, high-performance Intel processors and clustering technologies are rapidly increasing file server performance. This means that the network, not the file servers, is the limiting factor in supporting faster throughput. Our Internet connection requirements are growing 40 percent each year as well, which requires faster connectivity than is possible with a 1 GbE data center fabric.

SOLUTION

To meet these demands, we determined it was necessary to convert our data center network architecture from 100 megabit per second (Mb/s) and 1 GbE connections to 10 GbE connections. Our new 10 GbE data center fabric design will meet current needs while positioning us to incorporate new technology to meet future network requirements.

For example, the 10 GbE network will simplify the virtualized host architecture used for Office and Enterprise computing, and will also reduce network component cost. For Design computing, the 10 GbE network will reduce latency and allow faster application response times without the expense associated with alternative low-latency systems such as InfiniBand*. And, although storage area networks (SANs) in the Office and Enterprise application areas currently use a separate Fibre Channel (FC) fabric, we anticipate that as 10 GbE technology matures, we will be able to consolidate the SANs onto the 10 GbE network, reducing network complexity even further.
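As a rough illustration of the 40 percent annual growth figure cited in the Background section, the short sketch below compounds a hypothetical baseline bandwidth demand forward a few years. The 0.4 Gb/s starting point is an assumption chosen only for illustration, not an Intel figure.

```python
# Compound a hypothetical baseline bandwidth demand at the 40% annual growth
# rate cited in the Background section. The 0.4 Gb/s starting point is an
# assumed figure for illustration only.
baseline_gbps = 0.4     # hypothetical current demand on a 1 GbE uplink
growth_rate = 0.40      # 40 percent per year, per the paper

for year in range(1, 6):
    demand = baseline_gbps * (1 + growth_rate) ** year
    headroom = "fits" if demand <= 1.0 else "exceeds 1 GbE"
    print(f"Year {year}: {demand:.2f} Gb/s ({headroom})")

# Demand roughly doubles every two years at this rate (1.4 ** 2 = 1.96),
# which is why a 1 GbE fabric quickly runs out of headroom.
```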

Simplifying Virtualization for Office and Enterprise Applications

Intel's data center strategy for Office and Enterprise relies on virtualization and consolidation to reduce data center cost and power consumption, while reducing the time needed to provision servers. Our current consolidation level is 12:1, and we are targeting a consolidation level of 20:1 or greater with newer dual-socket servers based on Intel Xeon processors.

As virtualization increases, so does the number of server connections. Using a 1 GbE network fabric, a single physical server on the LAN requires eight GbE LAN ports, as shown in Figure 1 (left). By converting to a 10 GbE data center design, we can simplify server connectivity by reducing the number of LAN ports from eight to three: two 10 GbE and one 1 GbE. This significantly reduces cable and infrastructure complexity, as also shown in Figure 1 (right). For now, the SAN will continue to use FC connections and host bus adapters (HBAs).

In addition to simplifying physical infrastructure, a 10 GbE network fabric also has the potential to reduce the overall total cost of ownership (TCO) for LAN components per server by about 18.5 percent compared to the 1 GbE fabric, as shown in Table 1.

Table 1. 10 Gigabit Ethernet (GbE) Cost Savings

  LAN Component                10 GbE Fabric Effect on Cost
  Cable                        48% cost reduction
  LAN (per port)               50% cost reduction
  Server                       12% cost increase
  Storage                      0% (same for both fabrics)
  Total savings (per server)   18.5% overall cost reduction

[Figure 1 shows two panels: "Eight LAN Ports in a 1 Gigabit Ethernet Environment" and "Three LAN Ports in a 10 Gigabit Ethernet Environment." In both designs, the SAN and backup/restore traffic use two 4-Gb Fibre Channel (FC) connections through an HBA; in the 10 GbE design, the backup, live migration, production, and remote management networks are trunked over the 10 GbE ports.]

Figure 1. Implementing a 10 gigabit Ethernet data center fabric reduces the number of LAN ports for our virtualized host architecture from eight to three.
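Table 1 reports only the per-component cost deltas; the 18.5 percent overall figure depends on how much each component contributes to per-server LAN TCO, which the paper does not break out. As a minimal sketch of how such a blended figure is computed (the component cost weights below are hypothetical assumptions, not Intel's numbers), the arithmetic looks like this:

```python
# Hypothetical illustration of how component-level cost deltas blend into an
# overall per-server LAN TCO change. The deltas come from Table 1; the weights
# (each component's share of per-server LAN TCO) are assumed for illustration.
cost_deltas = {          # fractional cost change when moving from 1 GbE to 10 GbE
    "cable": -0.48,      # 48% cost reduction
    "lan_port": -0.50,   # 50% cost reduction
    "server": +0.12,     # 12% cost increase (10 GbE adapters cost more)
    "storage": 0.00,     # same for both fabrics
}
assumed_weights = {      # hypothetical share of per-server LAN TCO
    "cable": 0.12,
    "lan_port": 0.33,
    "server": 0.30,
    "storage": 0.25,
}

overall_change = sum(cost_deltas[c] * assumed_weights[c] for c in cost_deltas)
print(f"Blended per-server TCO change: {overall_change:+.1%}")
# With these assumed weights the result is roughly -18.7%, in line with the
# approximately 18.5% overall reduction reported in Table 1.
```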

Increasing Throughput and Reducing Latency for Design Applications

For some specific silicon design workloads, we needed to build a small, very low-latency cluster between servers. Several parallel applications running on these servers typically carry very large packets. As shown in Table 2, we compared application response times using several network fabric options. The 10 GbE network provided acceptable performance at an acceptable price point. For messages of 16,384 bytes and larger, the 10 GbE response time was roughly one-quarter of the 1 GbE response time and was closer to the performance of InfiniBand*, a more expensive low-latency switched fabric subsystem. In the table, multi-hop means having to use more than one switch to get through the network.

Table 2. Application Response Times for Various Packet Sizes

  Packet Size    Application Response Time (microseconds)
  (bytes)        1 GbE (Current Standard)   10 GbE    One-Hop 1 GbE   InfiniBand*
  8                     69.78                62.50         41.78          15.57
  128                   75.41                62.55         44.77          17.85
  1,024                116.99                62.56         64.52          32.25
  4,096                165.24                65.28        103.15          60.15
  16,384               257.41                62.47        195.87         168.57
  32,768               414.52               129.48        348.55         271.95
  65,536               699.25               162.30        627.12         477.93
  131,072            1,252.90               302.15      1,182.41         883.83

  Note: Intel internal measurements, June 2010.
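For context on how numbers like those in Table 2 can be gathered, the sketch below measures application-level round-trip response time over TCP for a range of message sizes. It is a minimal, hypothetical microbenchmark, not the tool Intel used for these measurements; host, port, and iteration count are placeholders.

```python
# Minimal TCP round-trip latency sketch (hypothetical; not Intel's benchmark).
# Start one copy with --server on one host, then run the client against it to
# time a request/response exchange per message size, in the spirit of Table 2.
import argparse, socket, time

SIZES = [8, 128, 1024, 4096, 16384, 32768, 65536, 131072]
ITERATIONS = 100

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def serve(port):
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            for size in SIZES:
                for _ in range(ITERATIONS):
                    conn.sendall(recv_exact(conn, size))  # echo the message back

def client(host, port):
    with socket.create_connection((host, port)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for size in SIZES:
            payload = b"x" * size
            start = time.perf_counter()
            for _ in range(ITERATIONS):
                sock.sendall(payload)
                recv_exact(sock, size)
            avg_us = (time.perf_counter() - start) / ITERATIONS * 1e6
            print(f"{size:>7} bytes: {avg_us:9.2f} us average round trip")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--server", action="store_true")
    p.add_argument("--host", default="127.0.0.1")
    p.add_argument("--port", type=int, default=5201)
    args = p.parse_args()
    serve(args.port) if args.server else client(args.host, args.port)
```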
Choosing the Right Network Components

We surveyed the market to find products that met our technical requirements at the right price point. We discovered that not all equipment and network cards are identical; performance can vary. With extensive testing, we found that we could reduce the cost of a 10 GbE port by 65 percent by selecting the right architecture and supplier. For example, we had to decide whether to place the network switch at the top of the rack, which provides more flexibility but is more expensive, or in the center of the row. We chose center-of-the-row switches to reduce cost.

Higher transmission speed requirements have led to new cable technologies, which we are deploying to optimize our implementation of a 10 GbE infrastructure:

- Small form-factor pluggable (SFP+) direct-attach cables. These twinaxial cables support 10 GbE connections over short distances of up to 7 meters. Some suppliers are producing cables with a transmission capability of up to 15 meters.

- Connectorized cabling. We are using this technology to simplify cabling and reduce installation cost, because it is supported over SFP+ ports. One trunk cable that we use supports 10 GbE up to 90 meters and provides six individual connections, which reduces the space required to support comparable densities by 66 percent. The trunks terminate on a variety of options, providing a very flexible system. We also use Multi-fiber Push-On (MPO) cable, a connectorized fiber technology comprised of multi-strand trunk bundles and cassettes. It supports 1 GbE and 10 GbE connections and can be upgraded easily to support 40 GbE and 100 GbE parallel-optic connections by simply swapping a cassette. The current range for 10 GbE is 300 meters on Optical Multimode 3 (OM3) multi-mode fiber (MMF) and 10 kilometers on single-mode fiber (SMF).

To maximize the supportable distances for 10 GbE, and for 40 GbE and 100 GbE when they arrive, we changed Intel's fiber standard to require a minimum of OM3 MMF, and OM4 where possible, and we try to use more energy-efficient SFP+ ports.
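The reach figures above lend themselves to a simple planning check. The sketch below is a hypothetical helper, not an Intel tool: the media names and distance limits mirror the values cited in this section, while the example cable runs are made up for illustration.

```python
# Hypothetical cable-run planning check using the reach limits cited above.
# Media names and distances mirror the white paper; the runs are illustrative.
REACH_METERS = {
    "sfp+_direct_attach": 7,     # twinax; some suppliers offer up to 15 m
    "connectorized_trunk": 90,   # six-connection trunk over SFP+ ports
    "om3_mmf": 300,              # 10 GbE over OM3 multi-mode fiber
    "smf": 10_000,               # 10 GbE over single-mode fiber
}

def check_run(media: str, distance_m: float) -> str:
    """Report whether a planned run fits within the cited reach for its media."""
    limit = REACH_METERS[media]
    status = "OK" if distance_m <= limit else "EXCEEDS REACH"
    return f"{media:<20} {distance_m:>7.1f} m (limit {limit} m): {status}"

if __name__ == "__main__":
    planned_runs = [                  # (media, planned distance in meters)
        ("sfp+_direct_attach", 5),    # in-rack or adjacent-rack hop
        ("connectorized_trunk", 85),  # row-level trunk to a center-of-row switch
        ("om3_mmf", 320),             # too long for OM3 at 10 GbE
    ]
    for media, distance in planned_runs:
        print(check_run(media, distance))
```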

FUTURE PLANS FOR OFFICE AND ENTERPRISE I/O AND STORAGE CONSOLIDATION

Historically, Ethernet's bandwidth limitations have kept it from being the fabric of choice for some application areas, such as I/O, storage, and interprocess communication (IPC). Consequently, we have used other fabrics, such as FC, to meet high-bandwidth, low-latency, no-drop needs. The advent of 10 GbE is enabling us to converge all our network needs onto a single, flexible infrastructure.

Several factors are responsible for increasing the I/O demand on our data centers. First, when more servers are added to the data center, input/output operations per second (IOPS) increase, which creates a proportional demand on the network. In addition, as each generation of processors becomes more complex, the amount of data associated with silicon design also increases significantly, again increasing network demand. Finally, systems with up to 512 gigabytes (GB) of memory are becoming more common, and these systems also need a high-speed network to read, write, and back up large amounts of data.

Our upgrade to 10 GbE products will enable us to consolidate storage for Office and Enterprise applications while reducing our 10 GbE per-port cost. When I/O convergence on Ethernet becomes a reality, multiple traffic types, such as LAN, storage, and IPC, can be consolidated onto a single, easy-to-use network fabric. We have conducted multiple phases of testing, and in the near future these 10 GbE ports will be carrying multiple traffic types.

CONCLUSION

A high-performance 10 GbE data center infrastructure can simplify the virtualization of Office and Enterprise applications and reduce per-server TCO. In addition, 10 GbE's lower network latency and increased throughput can support our design teams' high-density computing needs, improving design engineer productivity.

Our analysis shows that for a virtualized environment, a 10 GbE infrastructure can reduce our network TCO by as much as 18 to 25 percent. And for design applications, where low latency is required, 10 GbE can play a crucial role without requiring expensive low-latency technology. The new fabric will also reduce data center complexity and increase our network's agility to meet Intel's growing data center needs.

ACRONYMS

  GbE    gigabit Ethernet
  GB     gigabyte
  FC     Fibre Channel
  HBA    host bus adapter
  IOPS   input/output operations per second
  IPC    interprocess communication
  Mb/s   megabits per second
  MB     megabyte
  MMF    multi-mode fiber
  MPO    Multi-fiber Push-On
  OM3    Optical Multimode 3
  SAN    storage area network
  SFP+   small form-factor pluggable
  SMF    single-mode fiber
  TCO    total cost of ownership

For more information on Intel IT best practices, visit www.intel.com/it.

This paper is for informational purposes only. THIS DOCUMENT IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL, SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any proprietary rights, relating to use of information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.

Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others.

Copyright 2011 Intel Corporation. All rights reserved. Printed in USA 0111/AC/KC/PDF Please Recycle 324159-001US